Jan 21 23:35:19.293508 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 21 23:35:19.293558 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Jan 21 22:02:38 -00 2026 Jan 21 23:35:19.293586 kernel: KASLR disabled due to lack of seed Jan 21 23:35:19.293603 kernel: efi: EFI v2.7 by EDK II Jan 21 23:35:19.293620 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598 Jan 21 23:35:19.293636 kernel: secureboot: Secure boot disabled Jan 21 23:35:19.293654 kernel: ACPI: Early table checksum verification disabled Jan 21 23:35:19.293670 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 21 23:35:19.293687 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 21 23:35:19.293710 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 21 23:35:19.293727 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 21 23:35:19.293744 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 21 23:35:19.293760 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 21 23:35:19.293776 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 21 23:35:19.293800 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 21 23:35:19.293817 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 21 23:35:19.293867 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 21 23:35:19.293886 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 21 23:35:19.293906 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 21 23:35:19.293924 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 21 23:35:19.293941 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 21 23:35:19.293959 kernel: printk: legacy bootconsole [uart0] enabled Jan 21 23:35:19.294062 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 21 23:35:19.294088 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 21 23:35:19.294113 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Jan 21 23:35:19.294132 kernel: Zone ranges: Jan 21 23:35:19.294149 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 21 23:35:19.294166 kernel: DMA32 empty Jan 21 23:35:19.294183 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 21 23:35:19.294201 kernel: Device empty Jan 21 23:35:19.294218 kernel: Movable zone start for each node Jan 21 23:35:19.294234 kernel: Early memory node ranges Jan 21 23:35:19.294252 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 21 23:35:19.294269 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 21 23:35:19.294286 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 21 23:35:19.294303 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 21 23:35:19.294326 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 21 23:35:19.294342 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 21 23:35:19.294360 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 21 23:35:19.294377 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 21 23:35:19.294402 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 21 23:35:19.294424 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 21 23:35:19.294443 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 21 23:35:19.294503 kernel: psci: probing for conduit method from ACPI. Jan 21 23:35:19.294522 kernel: psci: PSCIv1.0 detected in firmware. Jan 21 23:35:19.294540 kernel: psci: Using standard PSCI v0.2 function IDs Jan 21 23:35:19.294558 kernel: psci: Trusted OS migration not required Jan 21 23:35:19.294576 kernel: psci: SMC Calling Convention v1.1 Jan 21 23:35:19.294594 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 21 23:35:19.294612 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 21 23:35:19.294637 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 21 23:35:19.294656 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 21 23:35:19.294673 kernel: Detected PIPT I-cache on CPU0 Jan 21 23:35:19.294708 kernel: CPU features: detected: GIC system register CPU interface Jan 21 23:35:19.294730 kernel: CPU features: detected: Spectre-v2 Jan 21 23:35:19.294749 kernel: CPU features: detected: Spectre-v3a Jan 21 23:35:19.294766 kernel: CPU features: detected: Spectre-BHB Jan 21 23:35:19.294784 kernel: CPU features: detected: ARM erratum 1742098 Jan 21 23:35:19.294802 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 21 23:35:19.294819 kernel: alternatives: applying boot alternatives Jan 21 23:35:19.294839 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ca60929099aca00ce2f86d3c34ded0cbc27315310cbe1bd1d91f864aae71550e Jan 21 23:35:19.294865 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 21 23:35:19.294883 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 21 23:35:19.294902 kernel: Fallback order for Node 0: 0 Jan 21 23:35:19.294920 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 21 23:35:19.294938 kernel: Policy zone: Normal Jan 21 23:35:19.294955 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 21 23:35:19.294999 kernel: software IO TLB: area num 2. Jan 21 23:35:19.295026 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Jan 21 23:35:19.295044 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 21 23:35:19.295062 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 21 23:35:19.295091 kernel: rcu: RCU event tracing is enabled. Jan 21 23:35:19.295110 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 21 23:35:19.295128 kernel: Trampoline variant of Tasks RCU enabled. Jan 21 23:35:19.295147 kernel: Tracing variant of Tasks RCU enabled. Jan 21 23:35:19.295166 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 21 23:35:19.295185 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 21 23:35:19.295203 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 21 23:35:19.295221 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 21 23:35:19.295239 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 21 23:35:19.295257 kernel: GICv3: 96 SPIs implemented Jan 21 23:35:19.295274 kernel: GICv3: 0 Extended SPIs implemented Jan 21 23:35:19.295299 kernel: Root IRQ handler: gic_handle_irq Jan 21 23:35:19.295317 kernel: GICv3: GICv3 features: 16 PPIs Jan 21 23:35:19.295334 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 21 23:35:19.295352 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 21 23:35:19.295370 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 21 23:35:19.295388 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 21 23:35:19.295407 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 21 23:35:19.295425 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 21 23:35:19.295443 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 21 23:35:19.295460 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 21 23:35:19.295478 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 21 23:35:19.295501 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 21 23:35:19.295519 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 21 23:35:19.295537 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 21 23:35:19.295556 kernel: Console: colour dummy device 80x25 Jan 21 23:35:19.295575 kernel: printk: legacy console [tty1] enabled Jan 21 23:35:19.295594 kernel: ACPI: Core revision 20240827 Jan 21 23:35:19.295614 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 21 23:35:19.295633 kernel: pid_max: default: 32768 minimum: 301 Jan 21 23:35:19.295658 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 21 23:35:19.295679 kernel: landlock: Up and running. Jan 21 23:35:19.295697 kernel: SELinux: Initializing. Jan 21 23:35:19.295715 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 21 23:35:19.295734 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 21 23:35:19.295752 kernel: rcu: Hierarchical SRCU implementation. Jan 21 23:35:19.295771 kernel: rcu: Max phase no-delay instances is 400. Jan 21 23:35:19.295790 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 21 23:35:19.295815 kernel: Remapping and enabling EFI services. Jan 21 23:35:19.295833 kernel: smp: Bringing up secondary CPUs ... Jan 21 23:35:19.295851 kernel: Detected PIPT I-cache on CPU1 Jan 21 23:35:19.295870 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 21 23:35:19.295889 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 21 23:35:19.295908 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 21 23:35:19.295926 kernel: smp: Brought up 1 node, 2 CPUs Jan 21 23:35:19.295949 kernel: SMP: Total of 2 processors activated. 
Jan 21 23:35:19.295968 kernel: CPU: All CPU(s) started at EL1 Jan 21 23:35:19.296109 kernel: CPU features: detected: 32-bit EL0 Support Jan 21 23:35:19.296136 kernel: CPU features: detected: 32-bit EL1 Support Jan 21 23:35:19.296155 kernel: CPU features: detected: CRC32 instructions Jan 21 23:35:19.296175 kernel: alternatives: applying system-wide alternatives Jan 21 23:35:19.296196 kernel: Memory: 3823468K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12416K init, 1038K bss, 185652K reserved, 16384K cma-reserved) Jan 21 23:35:19.296216 kernel: devtmpfs: initialized Jan 21 23:35:19.296241 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 21 23:35:19.296261 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 21 23:35:19.296281 kernel: 23664 pages in range for non-PLT usage Jan 21 23:35:19.296300 kernel: 515184 pages in range for PLT usage Jan 21 23:35:19.296320 kernel: pinctrl core: initialized pinctrl subsystem Jan 21 23:35:19.296345 kernel: SMBIOS 3.0.0 present. Jan 21 23:35:19.296365 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 21 23:35:19.296386 kernel: DMI: Memory slots populated: 0/0 Jan 21 23:35:19.296406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 21 23:35:19.296425 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 21 23:35:19.296445 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 21 23:35:19.296465 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 21 23:35:19.296491 kernel: audit: initializing netlink subsys (disabled) Jan 21 23:35:19.296511 kernel: audit: type=2000 audit(0.239:1): state=initialized audit_enabled=0 res=1 Jan 21 23:35:19.296531 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 21 23:35:19.296550 kernel: cpuidle: using governor menu Jan 21 23:35:19.296570 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 21 23:35:19.296589 kernel: ASID allocator initialised with 65536 entries Jan 21 23:35:19.296609 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 21 23:35:19.296634 kernel: Serial: AMBA PL011 UART driver Jan 21 23:35:19.296654 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 21 23:35:19.296673 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 21 23:35:19.296693 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 21 23:35:19.296713 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 21 23:35:19.296732 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 21 23:35:19.296754 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 21 23:35:19.296781 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 21 23:35:19.296801 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 21 23:35:19.296821 kernel: ACPI: Added _OSI(Module Device) Jan 21 23:35:19.296841 kernel: ACPI: Added _OSI(Processor Device) Jan 21 23:35:19.296860 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 21 23:35:19.296880 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 21 23:35:19.296899 kernel: ACPI: Interpreter enabled Jan 21 23:35:19.296924 kernel: ACPI: Using GIC for interrupt routing Jan 21 23:35:19.296944 kernel: ACPI: MCFG table detected, 1 entries Jan 21 23:35:19.296963 kernel: ACPI: CPU0 has been hot-added Jan 21 23:35:19.297047 kernel: ACPI: CPU1 has been hot-added Jan 21 23:35:19.297075 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 21 23:35:19.297488 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 21 23:35:19.297841 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 21 23:35:19.298360 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 21 23:35:19.298792 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 21 23:35:19.299269 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 21 23:35:19.299320 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 21 23:35:19.299358 kernel: acpiphp: Slot [1] registered Jan 21 23:35:19.299391 kernel: acpiphp: Slot [2] registered Jan 21 23:35:19.299438 kernel: acpiphp: Slot [3] registered Jan 21 23:35:19.299464 kernel: acpiphp: Slot [4] registered Jan 21 23:35:19.299498 kernel: acpiphp: Slot [5] registered Jan 21 23:35:19.299519 kernel: acpiphp: Slot [6] registered Jan 21 23:35:19.299554 kernel: acpiphp: Slot [7] registered Jan 21 23:35:19.299576 kernel: acpiphp: Slot [8] registered Jan 21 23:35:19.299612 kernel: acpiphp: Slot [9] registered Jan 21 23:35:19.299633 kernel: acpiphp: Slot [10] registered Jan 21 23:35:19.299704 kernel: acpiphp: Slot [11] registered Jan 21 23:35:19.299738 kernel: acpiphp: Slot [12] registered Jan 21 23:35:19.299762 kernel: acpiphp: Slot [13] registered Jan 21 23:35:19.299798 kernel: acpiphp: Slot [14] registered Jan 21 23:35:19.299832 kernel: acpiphp: Slot [15] registered Jan 21 23:35:19.299855 kernel: acpiphp: Slot [16] registered Jan 21 23:35:19.299889 kernel: acpiphp: Slot [17] registered Jan 21 23:35:19.299932 kernel: acpiphp: Slot [18] registered Jan 21 23:35:19.299953 kernel: acpiphp: Slot [19] registered Jan 21 23:35:19.300020 kernel: acpiphp: Slot [20] registered Jan 21 23:35:19.300045 kernel: acpiphp: Slot [21] registered Jan 21 23:35:19.300081 
kernel: acpiphp: Slot [22] registered Jan 21 23:35:19.300116 kernel: acpiphp: Slot [23] registered Jan 21 23:35:19.300138 kernel: acpiphp: Slot [24] registered Jan 21 23:35:19.300179 kernel: acpiphp: Slot [25] registered Jan 21 23:35:19.300214 kernel: acpiphp: Slot [26] registered Jan 21 23:35:19.300235 kernel: acpiphp: Slot [27] registered Jan 21 23:35:19.300271 kernel: acpiphp: Slot [28] registered Jan 21 23:35:19.300304 kernel: acpiphp: Slot [29] registered Jan 21 23:35:19.300326 kernel: acpiphp: Slot [30] registered Jan 21 23:35:19.300360 kernel: acpiphp: Slot [31] registered Jan 21 23:35:19.300381 kernel: PCI host bridge to bus 0000:00 Jan 21 23:35:19.300854 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 21 23:35:19.309600 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 21 23:35:19.310185 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 21 23:35:19.310651 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 21 23:35:19.311267 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 21 23:35:19.311856 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 21 23:35:19.312206 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 21 23:35:19.312509 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 21 23:35:19.312803 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 21 23:35:19.313181 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 21 23:35:19.313536 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 21 23:35:19.313876 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 21 23:35:19.314240 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 21 23:35:19.314577 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 21 23:35:19.314882 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 21 23:35:19.320209 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 21 23:35:19.320610 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 21 23:35:19.320919 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 21 23:35:19.320956 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 21 23:35:19.321008 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 21 23:35:19.321035 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 21 23:35:19.321056 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 21 23:35:19.321076 kernel: iommu: Default domain type: Translated Jan 21 23:35:19.321107 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 21 23:35:19.321126 kernel: efivars: Registered efivars operations Jan 21 23:35:19.321145 kernel: vgaarb: loaded Jan 21 23:35:19.321165 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 21 23:35:19.321185 kernel: VFS: Disk quotas dquot_6.6.0 Jan 21 23:35:19.321205 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 21 23:35:19.321225 kernel: pnp: PnP ACPI init Jan 21 23:35:19.321543 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 21 23:35:19.321576 kernel: pnp: PnP ACPI: found 1 devices Jan 21 23:35:19.321597 kernel: NET: Registered PF_INET protocol family Jan 21 23:35:19.321617 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 21 23:35:19.321637 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 21 23:35:19.321658 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 21 23:35:19.321678 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 21 23:35:19.321706 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 21 23:35:19.321727 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 21 23:35:19.321748 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 21 23:35:19.321770 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 21 23:35:19.321793 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 21 23:35:19.321814 kernel: PCI: CLS 0 bytes, default 64 Jan 21 23:35:19.321861 kernel: kvm [1]: HYP mode not available Jan 21 23:35:19.321891 kernel: Initialise system trusted keyrings Jan 21 23:35:19.321914 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 21 23:35:19.321934 kernel: Key type asymmetric registered Jan 21 23:35:19.321955 kernel: Asymmetric key parser 'x509' registered Jan 21 23:35:19.322266 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 21 23:35:19.322302 kernel: io scheduler mq-deadline registered Jan 21 23:35:19.322322 kernel: io scheduler kyber registered Jan 21 23:35:19.322350 kernel: io scheduler bfq registered Jan 21 23:35:19.324560 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 21 23:35:19.324614 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 21 23:35:19.324635 kernel: ACPI: button: Power Button [PWRB] Jan 21 23:35:19.324655 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 21 23:35:19.324675 kernel: ACPI: button: Sleep Button [SLPB] Jan 21 23:35:19.324708 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 21 23:35:19.324731 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 21 23:35:19.329380 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 21 23:35:19.329431 kernel: printk: legacy console [ttyS0] disabled Jan 21 23:35:19.329452 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 21 23:35:19.329472 kernel: printk: legacy console [ttyS0] enabled Jan 21 23:35:19.329491 kernel: printk: legacy bootconsole [uart0] disabled Jan 21 23:35:19.329520 kernel: thunder_xcv, ver 1.0 Jan 21 23:35:19.329539 kernel: thunder_bgx, ver 1.0 Jan 21 23:35:19.329559 kernel: nicpf, ver 1.0 Jan 21 23:35:19.329579 kernel: nicvf, ver 1.0 Jan 21 23:35:19.329969 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 21 23:35:19.332387 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-21T23:35:15 UTC (1769038515) Jan 21 23:35:19.332420 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 21 23:35:19.332450 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 21 23:35:19.332470 kernel: NET: Registered PF_INET6 protocol family Jan 21 23:35:19.332489 kernel: watchdog: NMI not fully supported Jan 21 23:35:19.332509 kernel: watchdog: Hard watchdog permanently disabled Jan 21 23:35:19.332528 kernel: Segment Routing with IPv6 Jan 21 23:35:19.332547 kernel: In-situ OAM (IOAM) with IPv6 Jan 21 23:35:19.332566 kernel: NET: Registered PF_PACKET protocol family Jan 21 23:35:19.332591 kernel: Key type 
dns_resolver registered Jan 21 23:35:19.332611 kernel: registered taskstats version 1 Jan 21 23:35:19.332630 kernel: Loading compiled-in X.509 certificates Jan 21 23:35:19.332650 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 665f7ea56fc50c946d7b42db233309a1abf7475f' Jan 21 23:35:19.332669 kernel: Demotion targets for Node 0: null Jan 21 23:35:19.332688 kernel: Key type .fscrypt registered Jan 21 23:35:19.332707 kernel: Key type fscrypt-provisioning registered Jan 21 23:35:19.332731 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 21 23:35:19.332751 kernel: ima: Allocated hash algorithm: sha1 Jan 21 23:35:19.332771 kernel: ima: No architecture policies found Jan 21 23:35:19.332790 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 21 23:35:19.332809 kernel: clk: Disabling unused clocks Jan 21 23:35:19.332828 kernel: PM: genpd: Disabling unused power domains Jan 21 23:35:19.332847 kernel: Freeing unused kernel memory: 12416K Jan 21 23:35:19.332868 kernel: Run /init as init process Jan 21 23:35:19.332891 kernel: with arguments: Jan 21 23:35:19.332910 kernel: /init Jan 21 23:35:19.332929 kernel: with environment: Jan 21 23:35:19.332947 kernel: HOME=/ Jan 21 23:35:19.332967 kernel: TERM=linux Jan 21 23:35:19.333020 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 21 23:35:19.333296 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 21 23:35:19.333521 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 21 23:35:19.333551 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 21 23:35:19.333571 kernel: GPT:25804799 != 33554431 Jan 21 23:35:19.333589 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 21 23:35:19.333609 kernel: GPT:25804799 != 33554431 Jan 21 23:35:19.333628 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 21 23:35:19.333654 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 21 23:35:19.333673 kernel: SCSI subsystem initialized Jan 21 23:35:19.333693 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 21 23:35:19.333712 kernel: device-mapper: uevent: version 1.0.3 Jan 21 23:35:19.333732 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 21 23:35:19.333752 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 21 23:35:19.333771 kernel: raid6: neonx8 gen() 6588 MB/s Jan 21 23:35:19.333794 kernel: raid6: neonx4 gen() 6590 MB/s Jan 21 23:35:19.333814 kernel: raid6: neonx2 gen() 5458 MB/s Jan 21 23:35:19.333860 kernel: raid6: neonx1 gen() 3940 MB/s Jan 21 23:35:19.333881 kernel: raid6: int64x8 gen() 3647 MB/s Jan 21 23:35:19.333900 kernel: raid6: int64x4 gen() 3723 MB/s Jan 21 23:35:19.333919 kernel: raid6: int64x2 gen() 3610 MB/s Jan 21 23:35:19.333938 kernel: raid6: int64x1 gen() 2721 MB/s Jan 21 23:35:19.333963 kernel: raid6: using algorithm neonx4 gen() 6590 MB/s Jan 21 23:35:19.336063 kernel: raid6: .... 
xor() 4625 MB/s, rmw enabled Jan 21 23:35:19.336096 kernel: raid6: using neon recovery algorithm Jan 21 23:35:19.336117 kernel: xor: measuring software checksum speed Jan 21 23:35:19.336137 kernel: 8regs : 13020 MB/sec Jan 21 23:35:19.336157 kernel: 32regs : 13079 MB/sec Jan 21 23:35:19.336177 kernel: arm64_neon : 8920 MB/sec Jan 21 23:35:19.336210 kernel: xor: using function: 32regs (13079 MB/sec) Jan 21 23:35:19.336231 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 21 23:35:19.336251 kernel: BTRFS: device fsid 297897fd-6303-44b2-8c75-36ebd35c694f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (221) Jan 21 23:35:19.336271 kernel: BTRFS info (device dm-0): first mount of filesystem 297897fd-6303-44b2-8c75-36ebd35c694f Jan 21 23:35:19.336291 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 21 23:35:19.336313 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 21 23:35:19.336333 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 21 23:35:19.336360 kernel: BTRFS info (device dm-0): enabling free space tree Jan 21 23:35:19.336381 kernel: loop: module loaded Jan 21 23:35:19.336402 kernel: loop0: detected capacity change from 0 to 91488 Jan 21 23:35:19.336421 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 21 23:35:19.336444 systemd[1]: Successfully made /usr/ read-only. Jan 21 23:35:19.336472 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 23:35:19.336501 systemd[1]: Detected virtualization amazon. Jan 21 23:35:19.336522 systemd[1]: Detected architecture arm64. Jan 21 23:35:19.336546 systemd[1]: Running in initrd. Jan 21 23:35:19.336570 systemd[1]: No hostname configured, using default hostname. Jan 21 23:35:19.336594 systemd[1]: Hostname set to . Jan 21 23:35:19.336616 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 21 23:35:19.336637 systemd[1]: Queued start job for default target initrd.target. Jan 21 23:35:19.336664 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 23:35:19.336686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 23:35:19.336708 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 23:35:19.336732 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 21 23:35:19.336754 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 23:35:19.336806 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 21 23:35:19.336830 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 21 23:35:19.336855 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 23:35:19.336877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 23:35:19.336901 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 21 23:35:19.336929 systemd[1]: Reached target paths.target - Path Units. 
Jan 21 23:35:19.336951 systemd[1]: Reached target slices.target - Slice Units. Jan 21 23:35:19.337008 systemd[1]: Reached target swap.target - Swaps. Jan 21 23:35:19.337042 systemd[1]: Reached target timers.target - Timer Units. Jan 21 23:35:19.337069 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 23:35:19.337091 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 23:35:19.337116 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 23:35:19.337150 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 21 23:35:19.337172 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 21 23:35:19.337195 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 23:35:19.337219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 23:35:19.337245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 23:35:19.337269 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 23:35:19.337293 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 21 23:35:19.337321 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 21 23:35:19.337344 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 23:35:19.337366 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 21 23:35:19.337389 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 21 23:35:19.337411 systemd[1]: Starting systemd-fsck-usr.service... Jan 21 23:35:19.337432 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 23:35:19.337454 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 23:35:19.337482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 23:35:19.337503 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 21 23:35:19.337530 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 23:35:19.337553 systemd[1]: Finished systemd-fsck-usr.service. Jan 21 23:35:19.337575 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 21 23:35:19.337598 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 21 23:35:19.337619 kernel: Bridge firewalling registered Jan 21 23:35:19.337646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 23:35:19.337749 systemd-journald[360]: Collecting audit messages is enabled. Jan 21 23:35:19.337799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 23:35:19.337852 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 23:35:19.337877 kernel: audit: type=1130 audit(1769038519.300:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.337900 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 21 23:35:19.337922 systemd-journald[360]: Journal started Jan 21 23:35:19.337959 systemd-journald[360]: Runtime Journal (/run/log/journal/ec22f76a82c15b962fff2fb29425a4f2) is 8M, max 75.3M, 67.3M free. Jan 21 23:35:19.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.265154 systemd-modules-load[361]: Inserted module 'br_netfilter' Jan 21 23:35:19.350237 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 23:35:19.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.362064 kernel: audit: type=1130 audit(1769038519.350:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.368277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 23:35:19.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.381341 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 23:35:19.396374 kernel: audit: type=1130 audit(1769038519.371:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.399641 kernel: audit: type=1130 audit(1769038519.383:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.392576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 21 23:35:19.408000 audit: BPF prog-id=6 op=LOAD Jan 21 23:35:19.411562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 23:35:19.418090 kernel: audit: type=1334 audit(1769038519.408:6): prog-id=6 op=LOAD Jan 21 23:35:19.419575 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 23:35:19.430301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 23:35:19.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.444263 kernel: audit: type=1130 audit(1769038519.433:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.458370 systemd-tmpfiles[387]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 21 23:35:19.478333 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 21 23:35:19.492145 kernel: audit: type=1130 audit(1769038519.482:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.498569 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 23:35:19.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.521015 kernel: audit: type=1130 audit(1769038519.504:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.521334 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 21 23:35:19.609382 dracut-cmdline[402]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ca60929099aca00ce2f86d3c34ded0cbc27315310cbe1bd1d91f864aae71550e Jan 21 23:35:19.637350 systemd-resolved[386]: Positive Trust Anchors: Jan 21 23:35:19.637391 systemd-resolved[386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 23:35:19.637401 systemd-resolved[386]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 23:35:19.637467 systemd-resolved[386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 23:35:19.893014 kernel: Loading iSCSI transport class v2.0-870. Jan 21 23:35:19.913100 kernel: random: crng init done Jan 21 23:35:19.936898 systemd-resolved[386]: Defaulting to hostname 'linux'. Jan 21 23:35:19.942104 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 23:35:19.954317 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 23:35:19.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:19.965014 kernel: audit: type=1130 audit(1769038519.953:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:19.986035 kernel: iscsi: registered transport (tcp) Jan 21 23:35:20.045676 kernel: iscsi: registered transport (qla4xxx) Jan 21 23:35:20.045764 kernel: QLogic iSCSI HBA Driver Jan 21 23:35:20.115435 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 23:35:20.145765 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 23:35:20.147030 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 23:35:20.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:20.166029 kernel: audit: type=1130 audit(1769038520.144:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:20.251097 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 21 23:35:20.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:20.257854 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 21 23:35:20.260366 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 21 23:35:20.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:20.343000 audit: BPF prog-id=7 op=LOAD Jan 21 23:35:20.338418 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 21 23:35:20.348000 audit: BPF prog-id=8 op=LOAD Jan 21 23:35:20.350449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 23:35:20.418170 systemd-udevd[642]: Using default interface naming scheme 'v257'. Jan 21 23:35:20.439760 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 23:35:20.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:20.456563 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 21 23:35:20.513079 dracut-pre-trigger[695]: rd.md=0: removing MD RAID activation Jan 21 23:35:20.559100 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 23:35:20.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:20.565000 audit: BPF prog-id=9 op=LOAD Jan 21 23:35:20.568211 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 23:35:20.605898 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 23:35:20.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:20.616232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 23:35:20.679794 systemd-networkd[769]: lo: Link UP Jan 21 23:35:20.680359 systemd-networkd[769]: lo: Gained carrier Jan 21 23:35:20.687226 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 23:35:20.698618 systemd[1]: Reached target network.target - Network. Jan 21 23:35:20.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:20.792657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 23:35:20.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:20.801854 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 21 23:35:21.060890 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 21 23:35:21.061006 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 21 23:35:21.075611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 23:35:21.082019 kernel: nvme nvme0: using unchecked data buffer Jan 21 23:35:21.082398 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 21 23:35:21.085155 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 23:35:21.098908 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 21 23:35:21.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:21.091099 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 23:35:21.103872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 23:35:21.117153 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:a1:29:9b:69:df Jan 21 23:35:21.121268 (udev-worker)[800]: Network interface NamePolicy= disabled on kernel command line. Jan 21 23:35:21.139943 systemd-networkd[769]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:35:21.140707 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 23:35:21.167013 systemd-networkd[769]: eth0: Link UP Jan 21 23:35:21.168318 systemd-networkd[769]: eth0: Gained carrier Jan 21 23:35:21.168350 systemd-networkd[769]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:35:21.192114 systemd-networkd[769]: eth0: DHCPv4 address 172.31.29.34/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 21 23:35:21.200205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 23:35:21.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:21.322712 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Jan 21 23:35:21.362942 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 21 23:35:21.421721 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 21 23:35:21.434590 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 21 23:35:21.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:21.467525 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 21 23:35:21.490732 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 23:35:21.496943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 23:35:21.507267 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 23:35:21.510261 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 21 23:35:21.520382 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 21 23:35:21.552936 disk-uuid[913]: Primary Header is updated. Jan 21 23:35:21.552936 disk-uuid[913]: Secondary Entries is updated. Jan 21 23:35:21.552936 disk-uuid[913]: Secondary Header is updated. Jan 21 23:35:21.604777 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 21 23:35:21.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:22.482126 systemd-networkd[769]: eth0: Gained IPv6LL Jan 21 23:35:22.689378 disk-uuid[916]: Warning: The kernel is still using the old partition table. Jan 21 23:35:22.689378 disk-uuid[916]: The new table will be used at the next reboot or after you Jan 21 23:35:22.689378 disk-uuid[916]: run partprobe(8) or kpartx(8) Jan 21 23:35:22.689378 disk-uuid[916]: The operation has completed successfully. Jan 21 23:35:22.703636 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 21 23:35:22.704246 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 21 23:35:22.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:22.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:22.718235 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 21 23:35:22.782027 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1097) Jan 21 23:35:22.786169 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:35:22.786237 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 21 23:35:22.820022 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 23:35:22.820100 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 23:35:22.830035 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:35:22.831480 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 21 23:35:22.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:22.840313 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 21 23:35:24.205107 ignition[1116]: Ignition 2.22.0 Jan 21 23:35:24.205679 ignition[1116]: Stage: fetch-offline Jan 21 23:35:24.208232 ignition[1116]: no configs at "/usr/lib/ignition/base.d" Jan 21 23:35:24.211429 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 23:35:24.215098 ignition[1116]: Ignition finished successfully Jan 21 23:35:24.219494 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 23:35:24.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:24.228936 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 21 23:35:24.283297 ignition[1125]: Ignition 2.22.0 Jan 21 23:35:24.283776 ignition[1125]: Stage: fetch Jan 21 23:35:24.284389 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Jan 21 23:35:24.284411 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 23:35:24.284563 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 23:35:24.305635 ignition[1125]: PUT result: OK Jan 21 23:35:24.309567 ignition[1125]: parsed url from cmdline: "" Jan 21 23:35:24.309584 ignition[1125]: no config URL provided Jan 21 23:35:24.309600 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign" Jan 21 23:35:24.309918 ignition[1125]: no config at "/usr/lib/ignition/user.ign" Jan 21 23:35:24.310003 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 23:35:24.319588 ignition[1125]: PUT result: OK Jan 21 23:35:24.319767 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 21 23:35:24.325245 ignition[1125]: GET result: OK Jan 21 23:35:24.325826 ignition[1125]: parsing config with SHA512: 1f07ab99581246f412191a1ece4202d1b7b7ae12e90be721362363de6f19eb15b4c87a771209ec7c0560a1cc94b5bab52f5c8e3e50c64979f7c3443b7a9ea08f Jan 21 23:35:24.339560 unknown[1125]: fetched base config from "system" Jan 21 23:35:24.340043 unknown[1125]: fetched base config from "system" Jan 21 23:35:24.340778 ignition[1125]: fetch: fetch complete Jan 21 23:35:24.340059 unknown[1125]: fetched user config from "aws" Jan 21 23:35:24.340792 ignition[1125]: fetch: fetch passed Jan 21 23:35:24.340906 ignition[1125]: Ignition finished successfully Jan 21 23:35:24.355590 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 21 23:35:24.363417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 21 23:35:24.367459 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 21 23:35:24.367503 kernel: audit: type=1130 audit(1769038524.360:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:24.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:24.423381 ignition[1132]: Ignition 2.22.0 Jan 21 23:35:24.423414 ignition[1132]: Stage: kargs Jan 21 23:35:24.423968 ignition[1132]: no configs at "/usr/lib/ignition/base.d" Jan 21 23:35:24.424020 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 23:35:24.424185 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 23:35:24.427500 ignition[1132]: PUT result: OK Jan 21 23:35:24.440671 ignition[1132]: kargs: kargs passed Jan 21 23:35:24.441026 ignition[1132]: Ignition finished successfully Jan 21 23:35:24.448818 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 21 23:35:24.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:24.454967 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 21 23:35:24.468032 kernel: audit: type=1130 audit(1769038524.451:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:24.509138 ignition[1138]: Ignition 2.22.0 Jan 21 23:35:24.509169 ignition[1138]: Stage: disks Jan 21 23:35:24.509787 ignition[1138]: no configs at "/usr/lib/ignition/base.d" Jan 21 23:35:24.509825 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 23:35:24.510017 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 23:35:24.523246 ignition[1138]: PUT result: OK Jan 21 23:35:24.528339 ignition[1138]: disks: disks passed Jan 21 23:35:24.528450 ignition[1138]: Ignition finished successfully Jan 21 23:35:24.535120 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 21 23:35:24.547453 kernel: audit: type=1130 audit(1769038524.534:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:24.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:24.536207 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 21 23:35:24.547553 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 21 23:35:24.548216 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 23:35:24.548754 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 23:35:24.549480 systemd[1]: Reached target basic.target - Basic System. Jan 21 23:35:24.551571 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 21 23:35:24.656758 systemd-fsck[1146]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 21 23:35:24.661235 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 21 23:35:24.670754 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 21 23:35:24.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:24.683036 kernel: audit: type=1130 audit(1769038524.666:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:24.871452 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f91073cf-b203-416d-af86-ee4485629886 r/w with ordered data mode. Quota mode: none. Jan 21 23:35:24.872497 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 21 23:35:24.877148 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 21 23:35:24.883015 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 23:35:24.890407 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 21 23:35:24.900243 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 21 23:35:24.900350 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 21 23:35:24.900409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 23:35:24.930468 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 21 23:35:24.935695 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 21 23:35:24.961026 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1165) Jan 21 23:35:24.966365 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:35:24.966446 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 21 23:35:24.973537 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 23:35:24.973647 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 23:35:24.976880 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 21 23:35:25.771539 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory Jan 21 23:35:25.781545 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory Jan 21 23:35:25.791413 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory Jan 21 23:35:25.802311 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory Jan 21 23:35:26.731592 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 21 23:35:26.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:26.744131 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 21 23:35:26.749588 kernel: audit: type=1130 audit(1769038526.730:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:26.749399 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 21 23:35:26.787016 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:35:26.788446 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 21 23:35:26.827132 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 21 23:35:26.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:26.840029 kernel: audit: type=1130 audit(1769038526.832:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:26.861018 ignition[1279]: INFO : Ignition 2.22.0 Jan 21 23:35:26.861018 ignition[1279]: INFO : Stage: mount Jan 21 23:35:26.865128 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 23:35:26.865128 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 23:35:26.865128 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 23:35:26.875367 ignition[1279]: INFO : PUT result: OK Jan 21 23:35:26.880401 ignition[1279]: INFO : mount: mount passed Jan 21 23:35:26.883449 ignition[1279]: INFO : Ignition finished successfully Jan 21 23:35:26.884535 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 21 23:35:26.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:26.897635 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 21 23:35:26.901121 kernel: audit: type=1130 audit(1769038526.887:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:26.929124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 23:35:26.977013 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1290) Jan 21 23:35:26.982343 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 34fe2ffc-17d8-4635-9624-842bf41e4932 Jan 21 23:35:26.982427 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 21 23:35:26.989786 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 23:35:26.989940 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 23:35:26.994207 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
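The Ignition stages logged above (fetch, kargs, disks, mount) each authenticate to the EC2 instance metadata service before reading configuration: a PUT to /latest/api/token obtains an IMDSv2 session token, and the subsequent GETs such as /2019-10-01/user-data present that token. As a rough illustration only (this sketch is not part of the captured log and simply assumes the standard EC2 IMDSv2 endpoints that appear in the entries above), the same token-then-fetch exchange looks like this in Python:

    # Illustrative sketch of the IMDSv2 exchange seen in the Ignition log entries
    # above; assumes the standard EC2 metadata endpoints, not Ignition's own code.
    import urllib.request

    IMDS = "http://169.254.169.254"

    # PUT /latest/api/token returns a short-lived session token (IMDSv2).
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req).read().decode()

    # GET the user data (the URL Ignition logs above) with the token attached.
    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(data_req).read().decode(errors="replace"))
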
Jan 21 23:35:27.048047 ignition[1307]: INFO : Ignition 2.22.0 Jan 21 23:35:27.048047 ignition[1307]: INFO : Stage: files Jan 21 23:35:27.052136 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 23:35:27.052136 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 23:35:27.052136 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 23:35:27.062771 ignition[1307]: INFO : PUT result: OK Jan 21 23:35:27.068577 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping Jan 21 23:35:27.073309 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 21 23:35:27.073309 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 21 23:35:27.086869 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 21 23:35:27.090911 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 21 23:35:27.094617 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 21 23:35:27.092114 unknown[1307]: wrote ssh authorized keys file for user: core Jan 21 23:35:27.130020 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 21 23:35:27.130020 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 21 23:35:27.223012 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 21 23:35:27.384941 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 21 23:35:27.384941 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 21 23:35:27.384941 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 21 23:35:27.384941 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 21 23:35:27.384941 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 21 23:35:27.408389 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 21 23:35:27.751588 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 21 23:35:28.178504 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 21 23:35:28.178504 ignition[1307]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 21 23:35:28.219466 ignition[1307]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 23:35:28.226761 ignition[1307]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 23:35:28.226761 ignition[1307]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 21 23:35:28.226761 ignition[1307]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 21 23:35:28.238701 ignition[1307]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 21 23:35:28.238701 ignition[1307]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 21 23:35:28.238701 ignition[1307]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 21 23:35:28.238701 ignition[1307]: INFO : files: files passed Jan 21 23:35:28.238701 ignition[1307]: INFO : Ignition finished successfully Jan 21 23:35:28.260156 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 21 23:35:28.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.269257 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 21 23:35:28.281618 kernel: audit: type=1130 audit(1769038528.264:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.284301 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 21 23:35:28.301890 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 21 23:35:28.305545 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 21 23:35:28.321442 kernel: audit: type=1130 audit(1769038528.307:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:28.321498 kernel: audit: type=1131 audit(1769038528.308:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.344853 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 23:35:28.344853 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 21 23:35:28.354196 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 23:35:28.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.357713 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 23:35:28.365574 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 21 23:35:28.375652 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 21 23:35:28.454351 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 21 23:35:28.458398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 21 23:35:28.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.466112 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 21 23:35:28.468895 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 21 23:35:28.472197 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 21 23:35:28.473906 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 21 23:35:28.538019 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 23:35:28.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.540190 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 21 23:35:28.579779 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 23:35:28.580106 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 21 23:35:28.584377 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 23:35:28.594037 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 21 23:35:28.596578 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 21 23:35:28.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.596843 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 23:35:28.613076 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 21 23:35:28.619661 systemd[1]: Stopped target basic.target - Basic System. Jan 21 23:35:28.622675 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 21 23:35:28.631493 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 23:35:28.635156 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 21 23:35:28.644964 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 21 23:35:28.648906 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 21 23:35:28.658331 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 23:35:28.665968 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 21 23:35:28.669590 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 21 23:35:28.677378 systemd[1]: Stopped target swap.target - Swaps. Jan 21 23:35:28.682933 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 21 23:35:28.683443 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 21 23:35:28.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.692290 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 21 23:35:28.696963 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 23:35:28.706828 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 21 23:35:28.714140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 23:35:28.718560 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 21 23:35:28.718875 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 21 23:35:28.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.731838 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 21 23:35:28.732420 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 23:35:28.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.742521 systemd[1]: ignition-files.service: Deactivated successfully. Jan 21 23:35:28.742968 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 21 23:35:28.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.753244 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 21 23:35:28.764386 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 21 23:35:28.769659 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 21 23:35:28.770126 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 23:35:28.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.786349 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 21 23:35:28.789400 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 23:35:28.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.799349 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 21 23:35:28.806097 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 23:35:28.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.822260 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 21 23:35:28.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.822494 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 21 23:35:28.847330 ignition[1363]: INFO : Ignition 2.22.0 Jan 21 23:35:28.847330 ignition[1363]: INFO : Stage: umount Jan 21 23:35:28.855272 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 23:35:28.855272 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 23:35:28.855272 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 23:35:28.855272 ignition[1363]: INFO : PUT result: OK Jan 21 23:35:28.873491 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 21 23:35:28.882090 ignition[1363]: INFO : umount: umount passed Jan 21 23:35:28.885139 ignition[1363]: INFO : Ignition finished successfully Jan 21 23:35:28.889752 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 21 23:35:28.892501 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 21 23:35:28.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.899589 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 21 23:35:28.900055 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 21 23:35:28.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.908029 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 21 23:35:28.908323 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 21 23:35:28.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.916492 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 21 23:35:28.917148 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 21 23:35:28.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.927157 systemd[1]: Stopped target network.target - Network. Jan 21 23:35:28.932123 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 21 23:35:28.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.932269 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 23:35:28.936195 systemd[1]: Stopped target paths.target - Path Units. Jan 21 23:35:28.943676 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 21 23:35:28.948692 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 23:35:28.955825 systemd[1]: Stopped target slices.target - Slice Units. Jan 21 23:35:28.958521 systemd[1]: Stopped target sockets.target - Socket Units. Jan 21 23:35:28.965701 systemd[1]: iscsid.socket: Deactivated successfully. Jan 21 23:35:28.965812 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 23:35:28.968909 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 21 23:35:28.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.969022 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 23:35:28.973873 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 21 23:35:28.973939 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 21 23:35:28.977531 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 21 23:35:28.977649 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 21 23:35:28.986285 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 21 23:35:28.986405 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 21 23:35:28.989847 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 21 23:35:29.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:28.993251 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 21 23:35:28.997199 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 21 23:35:28.999038 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 21 23:35:29.002717 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 21 23:35:29.002911 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 21 23:35:29.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.020560 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 21 23:35:29.020791 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 21 23:35:29.069000 audit: BPF prog-id=6 op=UNLOAD Jan 21 23:35:29.044791 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 21 23:35:29.045069 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 21 23:35:29.069491 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 21 23:35:29.081307 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 21 23:35:29.081411 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 21 23:35:29.094335 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 21 23:35:29.099000 audit: BPF prog-id=9 op=UNLOAD Jan 21 23:35:29.104207 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 21 23:35:29.107408 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 23:35:29.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.116531 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 21 23:35:29.119822 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 21 23:35:29.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.126576 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 21 23:35:29.126835 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 21 23:35:29.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.139087 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 23:35:29.162684 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 21 23:35:29.164334 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 23:35:29.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:29.180702 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 21 23:35:29.181138 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 21 23:35:29.191324 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 21 23:35:29.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.191437 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 23:35:29.197556 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 21 23:35:29.197677 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 21 23:35:29.200852 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 21 23:35:29.201004 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 21 23:35:29.204082 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 21 23:35:29.204199 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 23:35:29.214810 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 21 23:35:29.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.218834 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 21 23:35:29.223535 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 23:35:29.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.242136 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 21 23:35:29.242263 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 23:35:29.252936 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 21 23:35:29.253082 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 23:35:29.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.272552 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 21 23:35:29.272667 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 23:35:29.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.287338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 21 23:35:29.287616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 23:35:29.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.298197 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 21 23:35:29.306683 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 21 23:35:29.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.327952 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 21 23:35:29.328448 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 21 23:35:29.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:29.339679 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 21 23:35:29.347118 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 21 23:35:29.388157 systemd[1]: Switching root. Jan 21 23:35:29.440098 systemd-journald[360]: Journal stopped Jan 21 23:35:33.179886 systemd-journald[360]: Received SIGTERM from PID 1 (systemd). Jan 21 23:35:33.180083 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 21 23:35:33.180146 kernel: audit: type=1335 audit(1769038529.445:82): pid=360 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Jan 21 23:35:33.180217 kernel: SELinux: policy capability network_peer_controls=1 Jan 21 23:35:33.184099 kernel: SELinux: policy capability open_perms=1 Jan 21 23:35:33.184139 kernel: SELinux: policy capability extended_socket_class=1 Jan 21 23:35:33.184173 kernel: SELinux: policy capability always_check_network=0 Jan 21 23:35:33.184217 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 21 23:35:33.184267 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 21 23:35:33.184299 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 21 23:35:33.184332 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 21 23:35:33.184370 kernel: SELinux: policy capability userspace_initial_context=0 Jan 21 23:35:33.184404 kernel: audit: type=1403 audit(1769038530.397:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 21 23:35:33.184437 systemd[1]: Successfully loaded SELinux policy in 230.016ms. Jan 21 23:35:33.184492 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.990ms. Jan 21 23:35:33.184534 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 23:35:33.184571 systemd[1]: Detected virtualization amazon. 
Jan 21 23:35:33.184607 systemd[1]: Detected architecture arm64. Jan 21 23:35:33.184640 systemd[1]: Detected first boot. Jan 21 23:35:33.184674 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 21 23:35:33.184707 kernel: audit: type=1334 audit(1769038530.901:84): prog-id=10 op=LOAD Jan 21 23:35:33.184740 kernel: audit: type=1334 audit(1769038530.902:85): prog-id=10 op=UNLOAD Jan 21 23:35:33.184777 kernel: audit: type=1334 audit(1769038530.902:86): prog-id=11 op=LOAD Jan 21 23:35:33.184813 kernel: audit: type=1334 audit(1769038530.902:87): prog-id=11 op=UNLOAD Jan 21 23:35:33.184842 kernel: NET: Registered PF_VSOCK protocol family Jan 21 23:35:33.184876 zram_generator::config[1406]: No configuration found. Jan 21 23:35:33.184926 systemd[1]: Populated /etc with preset unit settings. Jan 21 23:35:33.184968 kernel: audit: type=1334 audit(1769038532.389:88): prog-id=12 op=LOAD Jan 21 23:35:33.185047 kernel: audit: type=1334 audit(1769038532.389:89): prog-id=3 op=UNLOAD Jan 21 23:35:33.185088 kernel: audit: type=1334 audit(1769038532.389:90): prog-id=13 op=LOAD Jan 21 23:35:33.185129 kernel: audit: type=1334 audit(1769038532.389:91): prog-id=14 op=LOAD Jan 21 23:35:33.185165 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 21 23:35:33.185203 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 21 23:35:33.185239 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 21 23:35:33.185275 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 21 23:35:33.185310 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 21 23:35:33.185358 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 21 23:35:33.185390 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 21 23:35:33.185420 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 21 23:35:33.185453 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 21 23:35:33.185488 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 21 23:35:33.185526 systemd[1]: Created slice user.slice - User and Session Slice. Jan 21 23:35:33.185559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 23:35:33.185594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 23:35:33.185627 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 21 23:35:33.185658 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 21 23:35:33.185689 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 21 23:35:33.185725 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 23:35:33.185761 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 21 23:35:33.185814 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 23:35:33.185853 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 23:35:33.185886 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 21 23:35:33.185918 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jan 21 23:35:33.185948 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 21 23:35:33.186025 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 21 23:35:33.186071 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 23:35:33.186106 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 23:35:33.186139 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 21 23:35:33.186171 systemd[1]: Reached target slices.target - Slice Units. Jan 21 23:35:33.186200 systemd[1]: Reached target swap.target - Swaps. Jan 21 23:35:33.186231 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 21 23:35:33.186264 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 21 23:35:33.186298 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 21 23:35:33.186332 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 23:35:33.186362 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 21 23:35:33.186396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 23:35:33.186428 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 21 23:35:33.186459 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 21 23:35:33.186488 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 23:35:33.186519 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 23:35:33.186553 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 21 23:35:33.186588 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 21 23:35:33.186618 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 21 23:35:33.186649 systemd[1]: Mounting media.mount - External Media Directory... Jan 21 23:35:33.186681 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 21 23:35:33.186716 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 21 23:35:33.186746 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 21 23:35:33.186784 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 21 23:35:33.186815 systemd[1]: Reached target machines.target - Containers. Jan 21 23:35:33.187023 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 21 23:35:33.187070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 23:35:33.187106 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 23:35:33.187366 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 21 23:35:33.187425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 23:35:33.187469 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 23:35:33.187503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 23:35:33.187540 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 21 23:35:33.187571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 23:35:33.187604 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 21 23:35:33.187643 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 21 23:35:33.187685 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 21 23:35:33.187719 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 21 23:35:33.187751 systemd[1]: Stopped systemd-fsck-usr.service. Jan 21 23:35:33.187787 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 23:35:33.187829 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 23:35:33.187865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 23:35:33.187901 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 23:35:33.187937 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 21 23:35:33.188010 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 21 23:35:33.188053 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 23:35:33.188085 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 21 23:35:33.188124 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 21 23:35:33.188155 systemd[1]: Mounted media.mount - External Media Directory. Jan 21 23:35:33.188185 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 21 23:35:33.188215 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 21 23:35:33.188248 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 21 23:35:33.188278 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 23:35:33.188309 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 21 23:35:33.188347 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 21 23:35:33.188383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 23:35:33.188413 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 23:35:33.188445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 23:35:33.188479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 23:35:33.188511 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 23:35:33.188544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 23:35:33.188577 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 23:35:33.188611 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 21 23:35:33.188643 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 21 23:35:33.188675 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 21 23:35:33.188717 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 21 23:35:33.188748 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 23:35:33.188781 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 21 23:35:33.188811 kernel: fuse: init (API version 7.41) Jan 21 23:35:33.188847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 23:35:33.188879 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 23:35:33.188910 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 21 23:35:33.188940 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 23:35:33.188971 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 21 23:35:33.191119 kernel: ACPI: bus type drm_connector registered Jan 21 23:35:33.191154 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 23:35:33.191251 systemd-journald[1486]: Collecting audit messages is enabled. Jan 21 23:35:33.191303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 23:35:33.191337 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 21 23:35:33.191371 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 21 23:35:33.191401 systemd-journald[1486]: Journal started Jan 21 23:35:33.191453 systemd-journald[1486]: Runtime Journal (/run/log/journal/ec22f76a82c15b962fff2fb29425a4f2) is 8M, max 75.3M, 67.3M free. Jan 21 23:35:32.561000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 21 23:35:32.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:32.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:32.840000 audit: BPF prog-id=14 op=UNLOAD Jan 21 23:35:32.840000 audit: BPF prog-id=13 op=UNLOAD Jan 21 23:35:32.842000 audit: BPF prog-id=15 op=LOAD Jan 21 23:35:32.843000 audit: BPF prog-id=16 op=LOAD Jan 21 23:35:32.843000 audit: BPF prog-id=17 op=LOAD Jan 21 23:35:32.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:33.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.166000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 21 23:35:33.166000 audit[1486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc3c68830 a2=4000 a3=0 items=0 ppid=1 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:33.166000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 21 23:35:32.369097 systemd[1]: Queued start job for default target multi-user.target. Jan 21 23:35:32.393542 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 21 23:35:32.394475 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 21 23:35:33.214126 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 23:35:33.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:33.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.207805 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 23:35:33.210014 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 23:35:33.214187 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 21 23:35:33.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.217112 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 21 23:35:33.235371 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 23:35:33.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.248241 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 21 23:35:33.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.256934 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 21 23:35:33.307411 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 23:35:33.315132 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 21 23:35:33.327432 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 21 23:35:33.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.331652 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 21 23:35:33.339568 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 21 23:35:33.402012 kernel: loop1: detected capacity change from 0 to 109872 Jan 21 23:35:33.407746 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 21 23:35:33.440555 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 21 23:35:33.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.449074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 23:35:33.486184 systemd-journald[1486]: Time spent on flushing to /var/log/journal/ec22f76a82c15b962fff2fb29425a4f2 is 91.558ms for 1064 entries. 
Jan 21 23:35:33.486184 systemd-journald[1486]: System Journal (/var/log/journal/ec22f76a82c15b962fff2fb29425a4f2) is 8M, max 588.1M, 580.1M free. Jan 21 23:35:33.593237 systemd-journald[1486]: Received client request to flush runtime journal. Jan 21 23:35:33.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.492952 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 21 23:35:33.501178 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 21 23:35:33.515922 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 21 23:35:33.558607 systemd-tmpfiles[1508]: ACLs are not supported, ignoring. Jan 21 23:35:33.558634 systemd-tmpfiles[1508]: ACLs are not supported, ignoring. Jan 21 23:35:33.585255 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 23:35:33.600483 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 21 23:35:33.605856 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 21 23:35:33.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.622420 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 23:35:33.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.694303 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 21 23:35:33.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.700000 audit: BPF prog-id=18 op=LOAD Jan 21 23:35:33.701000 audit: BPF prog-id=19 op=LOAD Jan 21 23:35:33.701000 audit: BPF prog-id=20 op=LOAD Jan 21 23:35:33.706297 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 21 23:35:33.713000 audit: BPF prog-id=21 op=LOAD Jan 21 23:35:33.719286 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 23:35:33.727518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 23:35:33.765459 kernel: loop2: detected capacity change from 0 to 100192 Jan 21 23:35:33.780477 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Jan 21 23:35:33.780527 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. 
Jan 21 23:35:33.782000 audit: BPF prog-id=22 op=LOAD Jan 21 23:35:33.782000 audit: BPF prog-id=23 op=LOAD Jan 21 23:35:33.782000 audit: BPF prog-id=24 op=LOAD Jan 21 23:35:33.787089 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 21 23:35:33.795000 audit: BPF prog-id=25 op=LOAD Jan 21 23:35:33.795000 audit: BPF prog-id=26 op=LOAD Jan 21 23:35:33.795000 audit: BPF prog-id=27 op=LOAD Jan 21 23:35:33.800466 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 21 23:35:33.809388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 23:35:33.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.898348 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 21 23:35:33.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:33.950610 systemd-nsresourced[1568]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 21 23:35:33.962517 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 21 23:35:33.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:34.177019 kernel: loop3: detected capacity change from 0 to 61504 Jan 21 23:35:34.186081 systemd-oomd[1561]: No swap; memory pressure usage will be degraded Jan 21 23:35:34.187243 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 21 23:35:34.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:34.284104 systemd-resolved[1562]: Positive Trust Anchors: Jan 21 23:35:34.284140 systemd-resolved[1562]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 23:35:34.284150 systemd-resolved[1562]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 23:35:34.284216 systemd-resolved[1562]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 23:35:34.300498 systemd-resolved[1562]: Defaulting to hostname 'linux'. Jan 21 23:35:34.305021 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 23:35:34.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:34.310472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 23:35:34.509023 kernel: loop4: detected capacity change from 0 to 207008 Jan 21 23:35:34.525203 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 21 23:35:34.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:34.529540 kernel: kauditd_printk_skb: 58 callbacks suppressed Jan 21 23:35:34.529595 kernel: audit: type=1130 audit(1769038534.527:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:34.528000 audit: BPF prog-id=8 op=UNLOAD Jan 21 23:35:34.537371 kernel: audit: type=1334 audit(1769038534.528:149): prog-id=8 op=UNLOAD Jan 21 23:35:34.537464 kernel: audit: type=1334 audit(1769038534.528:150): prog-id=7 op=UNLOAD Jan 21 23:35:34.528000 audit: BPF prog-id=7 op=UNLOAD Jan 21 23:35:34.537322 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 23:35:34.539925 kernel: audit: type=1334 audit(1769038534.533:151): prog-id=28 op=LOAD Jan 21 23:35:34.540056 kernel: audit: type=1334 audit(1769038534.533:152): prog-id=29 op=LOAD Jan 21 23:35:34.533000 audit: BPF prog-id=28 op=LOAD Jan 21 23:35:34.533000 audit: BPF prog-id=29 op=LOAD Jan 21 23:35:34.607507 systemd-udevd[1588]: Using default interface naming scheme 'v257'. Jan 21 23:35:34.811543 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 23:35:34.821070 kernel: loop5: detected capacity change from 0 to 109872 Jan 21 23:35:34.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:34.826664 kernel: audit: type=1130 audit(1769038534.817:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:34.835194 kernel: audit: type=1334 audit(1769038534.826:154): prog-id=30 op=LOAD Jan 21 23:35:34.826000 audit: BPF prog-id=30 op=LOAD Jan 21 23:35:34.829009 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 23:35:34.860349 kernel: loop6: detected capacity change from 0 to 100192 Jan 21 23:35:34.886128 kernel: loop7: detected capacity change from 0 to 61504 Jan 21 23:35:34.913400 kernel: loop1: detected capacity change from 0 to 207008 Jan 21 23:35:34.938923 (sd-merge)[1590]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 21 23:35:34.949507 (sd-merge)[1590]: Merged extensions into '/usr'. Jan 21 23:35:34.981962 (udev-worker)[1596]: Network interface NamePolicy= disabled on kernel command line. Jan 21 23:35:34.992090 systemd[1]: Reload requested from client PID 1507 ('systemd-sysext') (unit systemd-sysext.service)... Jan 21 23:35:34.992128 systemd[1]: Reloading... 
Jan 21 23:35:35.095592 systemd-networkd[1597]: lo: Link UP Jan 21 23:35:35.096089 systemd-networkd[1597]: lo: Gained carrier Jan 21 23:35:35.193883 systemd-networkd[1597]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:35:35.195681 systemd-networkd[1597]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 23:35:35.204249 systemd-networkd[1597]: eth0: Link UP Jan 21 23:35:35.204895 systemd-networkd[1597]: eth0: Gained carrier Jan 21 23:35:35.207418 systemd-networkd[1597]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 23:35:35.252222 systemd-networkd[1597]: eth0: DHCPv4 address 172.31.29.34/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 21 23:35:35.269642 zram_generator::config[1651]: No configuration found. Jan 21 23:35:35.938692 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 21 23:35:35.939887 systemd[1]: Reloading finished in 946 ms. Jan 21 23:35:35.975641 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 23:35:35.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:35.999397 kernel: audit: type=1130 audit(1769038535.978:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:35.999480 kernel: audit: type=1130 audit(1769038535.988:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:35.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:35.980500 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 21 23:35:36.021324 systemd[1]: Reached target network.target - Network. Jan 21 23:35:36.037904 systemd[1]: Starting ensure-sysext.service... Jan 21 23:35:36.046920 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 21 23:35:36.056607 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 21 23:35:36.067501 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 23:35:36.077458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 21 23:35:36.083000 audit: BPF prog-id=31 op=LOAD Jan 21 23:35:36.083000 audit: BPF prog-id=18 op=UNLOAD Jan 21 23:35:36.086000 audit: BPF prog-id=32 op=LOAD Jan 21 23:35:36.086000 audit: BPF prog-id=33 op=LOAD Jan 21 23:35:36.086000 audit: BPF prog-id=19 op=UNLOAD Jan 21 23:35:36.086000 audit: BPF prog-id=20 op=UNLOAD Jan 21 23:35:36.090399 kernel: audit: type=1334 audit(1769038536.083:157): prog-id=31 op=LOAD Jan 21 23:35:36.088000 audit: BPF prog-id=34 op=LOAD Jan 21 23:35:36.089000 audit: BPF prog-id=25 op=UNLOAD Jan 21 23:35:36.089000 audit: BPF prog-id=35 op=LOAD Jan 21 23:35:36.089000 audit: BPF prog-id=36 op=LOAD Jan 21 23:35:36.089000 audit: BPF prog-id=26 op=UNLOAD Jan 21 23:35:36.089000 audit: BPF prog-id=27 op=UNLOAD Jan 21 23:35:36.092000 audit: BPF prog-id=37 op=LOAD Jan 21 23:35:36.092000 audit: BPF prog-id=22 op=UNLOAD Jan 21 23:35:36.092000 audit: BPF prog-id=38 op=LOAD Jan 21 23:35:36.092000 audit: BPF prog-id=39 op=LOAD Jan 21 23:35:36.092000 audit: BPF prog-id=23 op=UNLOAD Jan 21 23:35:36.092000 audit: BPF prog-id=24 op=UNLOAD Jan 21 23:35:36.095000 audit: BPF prog-id=40 op=LOAD Jan 21 23:35:36.095000 audit: BPF prog-id=30 op=UNLOAD Jan 21 23:35:36.103000 audit: BPF prog-id=41 op=LOAD Jan 21 23:35:36.103000 audit: BPF prog-id=15 op=UNLOAD Jan 21 23:35:36.103000 audit: BPF prog-id=42 op=LOAD Jan 21 23:35:36.103000 audit: BPF prog-id=43 op=LOAD Jan 21 23:35:36.103000 audit: BPF prog-id=16 op=UNLOAD Jan 21 23:35:36.103000 audit: BPF prog-id=17 op=UNLOAD Jan 21 23:35:36.104000 audit: BPF prog-id=44 op=LOAD Jan 21 23:35:36.104000 audit: BPF prog-id=45 op=LOAD Jan 21 23:35:36.104000 audit: BPF prog-id=28 op=UNLOAD Jan 21 23:35:36.104000 audit: BPF prog-id=29 op=UNLOAD Jan 21 23:35:36.107000 audit: BPF prog-id=46 op=LOAD Jan 21 23:35:36.108000 audit: BPF prog-id=21 op=UNLOAD Jan 21 23:35:36.130405 systemd[1]: Reload requested from client PID 1800 ('systemctl') (unit ensure-sysext.service)... Jan 21 23:35:36.130440 systemd[1]: Reloading... Jan 21 23:35:36.196556 systemd-tmpfiles[1803]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 21 23:35:36.196633 systemd-tmpfiles[1803]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 21 23:35:36.197349 systemd-tmpfiles[1803]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 21 23:35:36.207155 systemd-tmpfiles[1803]: ACLs are not supported, ignoring. Jan 21 23:35:36.207362 systemd-tmpfiles[1803]: ACLs are not supported, ignoring. Jan 21 23:35:36.277967 systemd-tmpfiles[1803]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 23:35:36.280659 systemd-tmpfiles[1803]: Skipping /boot Jan 21 23:35:36.307735 systemd-networkd[1597]: eth0: Gained IPv6LL Jan 21 23:35:36.334219 systemd-tmpfiles[1803]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 23:35:36.334244 systemd-tmpfiles[1803]: Skipping /boot Jan 21 23:35:36.399029 zram_generator::config[1843]: No configuration found. Jan 21 23:35:36.877293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 21 23:35:36.881731 systemd[1]: Reloading finished in 750 ms. 
Jan 21 23:35:36.901000 audit: BPF prog-id=47 op=LOAD Jan 21 23:35:36.901000 audit: BPF prog-id=41 op=UNLOAD Jan 21 23:35:36.901000 audit: BPF prog-id=48 op=LOAD Jan 21 23:35:36.901000 audit: BPF prog-id=49 op=LOAD Jan 21 23:35:36.901000 audit: BPF prog-id=42 op=UNLOAD Jan 21 23:35:36.901000 audit: BPF prog-id=43 op=UNLOAD Jan 21 23:35:36.903000 audit: BPF prog-id=50 op=LOAD Jan 21 23:35:36.903000 audit: BPF prog-id=37 op=UNLOAD Jan 21 23:35:36.904000 audit: BPF prog-id=51 op=LOAD Jan 21 23:35:36.904000 audit: BPF prog-id=52 op=LOAD Jan 21 23:35:36.904000 audit: BPF prog-id=38 op=UNLOAD Jan 21 23:35:36.904000 audit: BPF prog-id=39 op=UNLOAD Jan 21 23:35:36.906000 audit: BPF prog-id=53 op=LOAD Jan 21 23:35:36.906000 audit: BPF prog-id=54 op=LOAD Jan 21 23:35:36.906000 audit: BPF prog-id=44 op=UNLOAD Jan 21 23:35:36.906000 audit: BPF prog-id=45 op=UNLOAD Jan 21 23:35:36.908000 audit: BPF prog-id=55 op=LOAD Jan 21 23:35:36.908000 audit: BPF prog-id=46 op=UNLOAD Jan 21 23:35:36.917000 audit: BPF prog-id=56 op=LOAD Jan 21 23:35:36.917000 audit: BPF prog-id=31 op=UNLOAD Jan 21 23:35:36.917000 audit: BPF prog-id=57 op=LOAD Jan 21 23:35:36.918000 audit: BPF prog-id=58 op=LOAD Jan 21 23:35:36.918000 audit: BPF prog-id=32 op=UNLOAD Jan 21 23:35:36.918000 audit: BPF prog-id=33 op=UNLOAD Jan 21 23:35:36.919000 audit: BPF prog-id=59 op=LOAD Jan 21 23:35:36.919000 audit: BPF prog-id=40 op=UNLOAD Jan 21 23:35:36.921000 audit: BPF prog-id=60 op=LOAD Jan 21 23:35:36.921000 audit: BPF prog-id=34 op=UNLOAD Jan 21 23:35:36.922000 audit: BPF prog-id=61 op=LOAD Jan 21 23:35:36.922000 audit: BPF prog-id=62 op=LOAD Jan 21 23:35:36.922000 audit: BPF prog-id=35 op=UNLOAD Jan 21 23:35:36.922000 audit: BPF prog-id=36 op=UNLOAD Jan 21 23:35:36.929182 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 21 23:35:36.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:36.933385 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 21 23:35:36.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:36.937523 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 23:35:36.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:36.944471 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 23:35:36.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.016664 systemd[1]: Reached target network-online.target - Network is Online. Jan 21 23:35:37.022379 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 21 23:35:37.033421 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 21 23:35:37.037064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 23:35:37.043732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 23:35:37.054625 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 23:35:37.062483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 23:35:37.066163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 23:35:37.066905 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 23:35:37.073241 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 21 23:35:37.087555 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 21 23:35:37.091287 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 23:35:37.100185 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 21 23:35:37.113484 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 21 23:35:37.125302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 23:35:37.127132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 23:35:37.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.145463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 23:35:37.156869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 23:35:37.161562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 23:35:37.162131 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 23:35:37.162421 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 23:35:37.177325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 23:35:37.195896 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 23:35:37.199363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 23:35:37.199789 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 21 23:35:37.200076 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 23:35:37.200429 systemd[1]: Reached target time-set.target - System Time Set. Jan 21 23:35:37.216871 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 23:35:37.222547 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 23:35:37.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.231288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 23:35:37.231847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 23:35:37.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.245713 systemd[1]: Finished ensure-sysext.service. Jan 21 23:35:37.246000 audit[1915]: SYSTEM_BOOT pid=1915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.250613 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 23:35:37.251281 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 23:35:37.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.258440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 23:35:37.259565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 23:35:37.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:37.283772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 23:35:37.284216 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 23:35:37.294328 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 21 23:35:37.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.312033 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 21 23:35:37.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.317622 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 21 23:35:37.328716 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 21 23:35:37.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.334361 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 21 23:35:37.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:37.395536 augenrules[1942]: No rules Jan 21 23:35:37.394000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 21 23:35:37.394000 audit[1942]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc0e15920 a2=420 a3=0 items=0 ppid=1901 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:37.394000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 23:35:37.400179 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 23:35:37.400745 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 23:35:40.447759 ldconfig[1907]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 21 23:35:40.459129 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 21 23:35:40.466249 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 21 23:35:40.510147 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 21 23:35:40.513890 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 23:35:40.517055 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 21 23:35:40.521065 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 21 23:35:40.525187 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 21 23:35:40.528400 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 21 23:35:40.532064 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 21 23:35:40.536501 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 21 23:35:40.539735 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 21 23:35:40.543184 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 21 23:35:40.543257 systemd[1]: Reached target paths.target - Path Units. Jan 21 23:35:40.546001 systemd[1]: Reached target timers.target - Timer Units. Jan 21 23:35:40.550784 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 21 23:35:40.557629 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 21 23:35:40.566346 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 21 23:35:40.570817 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 21 23:35:40.574613 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 21 23:35:40.590359 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 21 23:35:40.593954 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 21 23:35:40.598364 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 21 23:35:40.601862 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 23:35:40.604688 systemd[1]: Reached target basic.target - Basic System. Jan 21 23:35:40.607624 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 21 23:35:40.607685 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 21 23:35:40.610289 systemd[1]: Starting containerd.service - containerd container runtime... Jan 21 23:35:40.618153 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 21 23:35:40.624424 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 21 23:35:40.634501 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 21 23:35:40.647673 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 21 23:35:40.659415 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 21 23:35:40.659637 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 21 23:35:40.667319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:35:40.671431 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 21 23:35:40.693699 systemd[1]: Started ntpd.service - Network Time Service. Jan 21 23:35:40.702604 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 21 23:35:40.712770 jq[1957]: false Jan 21 23:35:40.713724 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 21 23:35:40.730350 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 21 23:35:40.743482 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 21 23:35:40.763500 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 21 23:35:40.777055 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 21 23:35:40.781936 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 21 23:35:40.782899 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 21 23:35:40.786419 systemd[1]: Starting update-engine.service - Update Engine... Jan 21 23:35:40.800419 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 21 23:35:40.828349 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 21 23:35:40.834922 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 21 23:35:40.835586 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 21 23:35:40.888720 extend-filesystems[1958]: Found /dev/nvme0n1p6 Jan 21 23:35:40.903940 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 21 23:35:40.905629 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 21 23:35:40.919136 systemd[1]: motdgen.service: Deactivated successfully. Jan 21 23:35:40.919878 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 21 23:35:40.943176 extend-filesystems[1958]: Found /dev/nvme0n1p9 Jan 21 23:35:40.947129 jq[1973]: true Jan 21 23:35:40.955503 extend-filesystems[1958]: Checking size of /dev/nvme0n1p9 Jan 21 23:35:40.972694 ntpd[1961]: ntpd 4.2.8p18@1.4062-o Wed Jan 21 21:31:46 UTC 2026 (1): Starting Jan 21 23:35:40.973393 ntpd[1961]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 21 23:35:40.974590 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: ntpd 4.2.8p18@1.4062-o Wed Jan 21 21:31:46 UTC 2026 (1): Starting Jan 21 23:35:40.974590 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 21 23:35:40.974590 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: ---------------------------------------------------- Jan 21 23:35:40.974590 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: ntp-4 is maintained by Network Time Foundation, Jan 21 23:35:40.974590 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 21 23:35:40.974590 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: corporation. Support and training for ntp-4 are Jan 21 23:35:40.974590 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: available at https://www.nwtime.org/support Jan 21 23:35:40.974590 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: ---------------------------------------------------- Jan 21 23:35:40.973420 ntpd[1961]: ---------------------------------------------------- Jan 21 23:35:40.973439 ntpd[1961]: ntp-4 is maintained by Network Time Foundation, Jan 21 23:35:40.973462 ntpd[1961]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 21 23:35:40.973480 ntpd[1961]: corporation. 
Support and training for ntp-4 are Jan 21 23:35:40.973498 ntpd[1961]: available at https://www.nwtime.org/support Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: proto: precision = 0.096 usec (-23) Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: basedate set to 2026-01-09 Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: gps base set to 2026-01-11 (week 2401) Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: Listen and drop on 0 v6wildcard [::]:123 Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: Listen normally on 2 lo 127.0.0.1:123 Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: Listen normally on 3 eth0 172.31.29.34:123 Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: Listen normally on 4 lo [::1]:123 Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: Listen normally on 5 eth0 [fe80::4a1:29ff:fe9b:69df%2]:123 Jan 21 23:35:40.996269 ntpd[1961]: 21 Jan 23:35:40 ntpd[1961]: Listening on routing socket on fd #22 for interface updates Jan 21 23:35:40.973515 ntpd[1961]: ---------------------------------------------------- Jan 21 23:35:40.984811 ntpd[1961]: proto: precision = 0.096 usec (-23) Jan 21 23:35:40.986554 ntpd[1961]: basedate set to 2026-01-09 Jan 21 23:35:40.986591 ntpd[1961]: gps base set to 2026-01-11 (week 2401) Jan 21 23:35:40.986792 ntpd[1961]: Listen and drop on 0 v6wildcard [::]:123 Jan 21 23:35:40.986845 ntpd[1961]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 21 23:35:40.989320 ntpd[1961]: Listen normally on 2 lo 127.0.0.1:123 Jan 21 23:35:40.989382 ntpd[1961]: Listen normally on 3 eth0 172.31.29.34:123 Jan 21 23:35:40.989441 ntpd[1961]: Listen normally on 4 lo [::1]:123 Jan 21 23:35:40.989490 ntpd[1961]: Listen normally on 5 eth0 [fe80::4a1:29ff:fe9b:69df%2]:123 Jan 21 23:35:40.989537 ntpd[1961]: Listening on routing socket on fd #22 for interface updates Jan 21 23:35:41.010192 tar[1982]: linux-arm64/LICENSE Jan 21 23:35:41.018759 tar[1982]: linux-arm64/helm Jan 21 23:35:41.029541 ntpd[1961]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 23:35:41.032828 ntpd[1961]: 21 Jan 23:35:41 ntpd[1961]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 23:35:41.032828 ntpd[1961]: 21 Jan 23:35:41 ntpd[1961]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 23:35:41.029608 ntpd[1961]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 23:35:41.043555 dbus-daemon[1955]: [system] SELinux support is enabled Jan 21 23:35:41.044081 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 21 23:35:41.051638 dbus-daemon[1955]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1597 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 21 23:35:41.055867 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 21 23:35:41.068353 dbus-daemon[1955]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 21 23:35:41.055938 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 21 23:35:41.057355 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 21 23:35:41.057392 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 21 23:35:41.081316 extend-filesystems[1958]: Resized partition /dev/nvme0n1p9 Jan 21 23:35:41.095264 extend-filesystems[2022]: resize2fs 1.47.3 (8-Jul-2025) Jan 21 23:35:41.107953 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 21 23:35:41.100472 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 21 23:35:41.123025 update_engine[1972]: I20260121 23:35:41.119498 1972 main.cc:92] Flatcar Update Engine starting Jan 21 23:35:41.138349 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 21 23:35:41.155205 extend-filesystems[2022]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 21 23:35:41.155205 extend-filesystems[2022]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 21 23:35:41.155205 extend-filesystems[2022]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 21 23:35:41.190680 extend-filesystems[1958]: Resized filesystem in /dev/nvme0n1p9 Jan 21 23:35:41.196343 jq[2009]: true Jan 21 23:35:41.156154 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 21 23:35:41.196866 update_engine[1972]: I20260121 23:35:41.164306 1972 update_check_scheduler.cc:74] Next update check in 5m31s Jan 21 23:35:41.174494 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 21 23:35:41.177030 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 21 23:35:41.211372 systemd[1]: Started update-engine.service - Update Engine. Jan 21 23:35:41.234599 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 21 23:35:41.310040 coreos-metadata[1954]: Jan 21 23:35:41.309 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 21 23:35:41.320934 coreos-metadata[1954]: Jan 21 23:35:41.320 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 21 23:35:41.320715 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 21 23:35:41.329354 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Jan 21 23:35:41.346083 coreos-metadata[1954]: Jan 21 23:35:41.346 INFO Fetch successful Jan 21 23:35:41.346083 coreos-metadata[1954]: Jan 21 23:35:41.346 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 21 23:35:41.346600 coreos-metadata[1954]: Jan 21 23:35:41.346 INFO Fetch successful Jan 21 23:35:41.346600 coreos-metadata[1954]: Jan 21 23:35:41.346 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 21 23:35:41.355373 coreos-metadata[1954]: Jan 21 23:35:41.355 INFO Fetch successful Jan 21 23:35:41.355373 coreos-metadata[1954]: Jan 21 23:35:41.355 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 21 23:35:41.359740 coreos-metadata[1954]: Jan 21 23:35:41.359 INFO Fetch successful Jan 21 23:35:41.359740 coreos-metadata[1954]: Jan 21 23:35:41.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 21 23:35:41.368137 coreos-metadata[1954]: Jan 21 23:35:41.368 INFO Fetch failed with 404: resource not found Jan 21 23:35:41.368137 coreos-metadata[1954]: Jan 21 23:35:41.368 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 21 23:35:41.369882 coreos-metadata[1954]: Jan 21 23:35:41.369 INFO Fetch successful Jan 21 23:35:41.369882 coreos-metadata[1954]: Jan 21 23:35:41.369 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 21 23:35:41.378033 coreos-metadata[1954]: Jan 21 23:35:41.376 INFO Fetch successful Jan 21 23:35:41.378033 coreos-metadata[1954]: Jan 21 23:35:41.376 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 21 23:35:41.381897 coreos-metadata[1954]: Jan 21 23:35:41.381 INFO Fetch successful Jan 21 23:35:41.381897 coreos-metadata[1954]: Jan 21 23:35:41.381 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 21 23:35:41.387374 coreos-metadata[1954]: Jan 21 23:35:41.387 INFO Fetch successful Jan 21 23:35:41.387374 coreos-metadata[1954]: Jan 21 23:35:41.387 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 21 23:35:41.403782 coreos-metadata[1954]: Jan 21 23:35:41.401 INFO Fetch successful Jan 21 23:35:41.459512 systemd-logind[1971]: Watching system buttons on /dev/input/event0 (Power Button) Jan 21 23:35:41.460182 systemd-logind[1971]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 21 23:35:41.460803 systemd-logind[1971]: New seat seat0. Jan 21 23:35:41.462707 systemd[1]: Started systemd-logind.service - User Login Management. Jan 21 23:35:41.540383 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 21 23:35:41.546770 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 21 23:35:41.647636 bash[2061]: Updated "/home/core/.ssh/authorized_keys" Jan 21 23:35:41.663656 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 21 23:35:41.676775 systemd[1]: Starting sshkeys.service... Jan 21 23:35:41.761158 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 21 23:35:41.764962 dbus-daemon[1955]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 21 23:35:41.777212 dbus-daemon[1955]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2023 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 21 23:35:41.791329 systemd[1]: Starting polkit.service - Authorization Manager... Jan 21 23:35:41.845581 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 21 23:35:41.867129 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 21 23:35:41.962807 amazon-ssm-agent[2036]: Initializing new seelog logger Jan 21 23:35:41.970546 amazon-ssm-agent[2036]: New Seelog Logger Creation Complete Jan 21 23:35:41.970546 amazon-ssm-agent[2036]: 2026/01/21 23:35:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 23:35:41.970546 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 23:35:41.970930 amazon-ssm-agent[2036]: 2026/01/21 23:35:41 processing appconfig overrides Jan 21 23:35:41.981518 amazon-ssm-agent[2036]: 2026/01/21 23:35:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 23:35:41.981518 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 23:35:41.981794 amazon-ssm-agent[2036]: 2026/01/21 23:35:41 processing appconfig overrides Jan 21 23:35:41.987358 amazon-ssm-agent[2036]: 2026/01/21 23:35:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 23:35:41.987358 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 23:35:41.987358 amazon-ssm-agent[2036]: 2026/01/21 23:35:41 processing appconfig overrides Jan 21 23:35:41.988209 amazon-ssm-agent[2036]: 2026-01-21 23:35:41.9813 INFO Proxy environment variables: Jan 21 23:35:42.010930 amazon-ssm-agent[2036]: 2026/01/21 23:35:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 23:35:42.010930 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 21 23:35:42.010930 amazon-ssm-agent[2036]: 2026/01/21 23:35:42 processing appconfig overrides Jan 21 23:35:42.090118 amazon-ssm-agent[2036]: 2026-01-21 23:35:41.9813 INFO https_proxy: Jan 21 23:35:42.104532 locksmithd[2033]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 21 23:35:42.108229 coreos-metadata[2083]: Jan 21 23:35:42.107 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 21 23:35:42.120188 coreos-metadata[2083]: Jan 21 23:35:42.117 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 21 23:35:42.120188 coreos-metadata[2083]: Jan 21 23:35:42.119 INFO Fetch successful Jan 21 23:35:42.120188 coreos-metadata[2083]: Jan 21 23:35:42.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 21 23:35:42.131171 coreos-metadata[2083]: Jan 21 23:35:42.126 INFO Fetch successful Jan 21 23:35:42.137709 unknown[2083]: wrote ssh authorized keys file for user: core Jan 21 23:35:42.166124 containerd[2003]: time="2026-01-21T23:35:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 21 23:35:42.166124 containerd[2003]: time="2026-01-21T23:35:42.159528344Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 21 23:35:42.194046 amazon-ssm-agent[2036]: 2026-01-21 23:35:41.9813 INFO http_proxy: Jan 21 23:35:42.213023 update-ssh-keys[2114]: Updated "/home/core/.ssh/authorized_keys" Jan 21 23:35:42.215149 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 21 23:35:42.232883 systemd[1]: Finished sshkeys.service. 
Jan 21 23:35:42.252245 containerd[2003]: time="2026-01-21T23:35:42.250118948Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="24.096µs" Jan 21 23:35:42.252245 containerd[2003]: time="2026-01-21T23:35:42.250231844Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 21 23:35:42.252245 containerd[2003]: time="2026-01-21T23:35:42.250360988Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 21 23:35:42.252245 containerd[2003]: time="2026-01-21T23:35:42.250397636Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 21 23:35:42.252245 containerd[2003]: time="2026-01-21T23:35:42.251032364Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 21 23:35:42.252245 containerd[2003]: time="2026-01-21T23:35:42.251132084Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 23:35:42.252580 containerd[2003]: time="2026-01-21T23:35:42.252341132Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 23:35:42.252580 containerd[2003]: time="2026-01-21T23:35:42.252411392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 23:35:42.255728 containerd[2003]: time="2026-01-21T23:35:42.255492152Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 23:35:42.256309 containerd[2003]: time="2026-01-21T23:35:42.255701924Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 23:35:42.259207 containerd[2003]: time="2026-01-21T23:35:42.256263884Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 23:35:42.259207 containerd[2003]: time="2026-01-21T23:35:42.256851644Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 23:35:42.261698 containerd[2003]: time="2026-01-21T23:35:42.259834004Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 23:35:42.261698 containerd[2003]: time="2026-01-21T23:35:42.259897220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 21 23:35:42.263207 containerd[2003]: time="2026-01-21T23:35:42.262702892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 21 23:35:42.263326 containerd[2003]: time="2026-01-21T23:35:42.263280020Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 23:35:42.264087 containerd[2003]: time="2026-01-21T23:35:42.263357768Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 21 23:35:42.264087 containerd[2003]: time="2026-01-21T23:35:42.263419820Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 21 23:35:42.264087 containerd[2003]: time="2026-01-21T23:35:42.263500676Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 21 23:35:42.264087 containerd[2003]: time="2026-01-21T23:35:42.263936624Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 21 23:35:42.273632 containerd[2003]: time="2026-01-21T23:35:42.273450332Z" level=info msg="metadata content store policy set" policy=shared Jan 21 23:35:42.286575 containerd[2003]: time="2026-01-21T23:35:42.286491321Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.286794957Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.286961649Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287012025Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287043765Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287072517Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287106141Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287144949Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287174817Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287203197Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287229093Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287255457Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287278989Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 21 23:35:42.288022 containerd[2003]: time="2026-01-21T23:35:42.287315925Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287563281Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 21 23:35:42.288717 
containerd[2003]: time="2026-01-21T23:35:42.287610549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287644977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287673249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287700825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287732013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287763033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287790201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287817453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287843541Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287872665Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 21 23:35:42.288717 containerd[2003]: time="2026-01-21T23:35:42.287940441Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 21 23:35:42.298021 containerd[2003]: time="2026-01-21T23:35:42.296450829Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 21 23:35:42.298021 containerd[2003]: time="2026-01-21T23:35:42.296543529Z" level=info msg="Start snapshots syncer" Jan 21 23:35:42.298021 containerd[2003]: time="2026-01-21T23:35:42.296617917Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 21 23:35:42.301046 containerd[2003]: time="2026-01-21T23:35:42.300146337Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 21 23:35:42.301506 amazon-ssm-agent[2036]: 2026-01-21 23:35:41.9813 INFO no_proxy: Jan 21 23:35:42.304013 containerd[2003]: time="2026-01-21T23:35:42.303061329Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 21 23:35:42.307554 containerd[2003]: time="2026-01-21T23:35:42.307479837Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309316377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309402009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309432513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309463017Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309493833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309536853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309568785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309598113Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309628365Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309694617Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309728205Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309751677Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309801609Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 23:35:42.311024 containerd[2003]: time="2026-01-21T23:35:42.309829257Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 21 23:35:42.311710 containerd[2003]: time="2026-01-21T23:35:42.309863337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 21 23:35:42.311710 containerd[2003]: time="2026-01-21T23:35:42.309891261Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 21 23:35:42.311710 containerd[2003]: time="2026-01-21T23:35:42.309919257Z" level=info msg="runtime interface created" Jan 21 23:35:42.311710 containerd[2003]: time="2026-01-21T23:35:42.309934905Z" level=info msg="created NRI interface" Jan 21 23:35:42.311710 containerd[2003]: time="2026-01-21T23:35:42.309956289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 21 23:35:42.311710 containerd[2003]: time="2026-01-21T23:35:42.310055025Z" level=info msg="Connect containerd service" Jan 21 23:35:42.311710 containerd[2003]: time="2026-01-21T23:35:42.310119381Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 21 23:35:42.319011 containerd[2003]: time="2026-01-21T23:35:42.315954249Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 21 23:35:42.406127 amazon-ssm-agent[2036]: 2026-01-21 23:35:41.9817 INFO Checking if agent identity type OnPrem can be assumed Jan 21 23:35:42.503967 amazon-ssm-agent[2036]: 2026-01-21 23:35:41.9818 INFO Checking if agent identity type EC2 can be assumed Jan 21 23:35:42.616014 amazon-ssm-agent[2036]: 2026-01-21 23:35:42.4912 INFO Agent will take identity from EC2 Jan 21 23:35:42.625190 polkitd[2080]: Started polkitd version 126 Jan 21 23:35:42.669369 polkitd[2080]: Loading rules from directory /etc/polkit-1/rules.d Jan 21 23:35:42.678717 polkitd[2080]: Loading rules from directory /run/polkit-1/rules.d Jan 21 23:35:42.678863 polkitd[2080]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 21 23:35:42.679914 polkitd[2080]: Loading rules from directory 
/usr/local/share/polkit-1/rules.d Jan 21 23:35:42.680079 polkitd[2080]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 21 23:35:42.680185 polkitd[2080]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 21 23:35:42.690525 polkitd[2080]: Finished loading, compiling and executing 2 rules Jan 21 23:35:42.694471 systemd[1]: Started polkit.service - Authorization Manager. Jan 21 23:35:42.706440 dbus-daemon[1955]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 21 23:35:42.715031 amazon-ssm-agent[2036]: 2026-01-21 23:35:42.5042 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 21 23:35:42.715099 polkitd[2080]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 21 23:35:42.783097 systemd-hostnamed[2023]: Hostname set to <ip-172-31-29-34> (transient) Jan 21 23:35:42.783146 systemd-resolved[1562]: System hostname changed to 'ip-172-31-29-34'. Jan 21 23:35:42.817935 amazon-ssm-agent[2036]: 2026-01-21 23:35:42.5042 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.906851256Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907021716Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907077444Z" level=info msg="Start subscribing containerd event" Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907142772Z" level=info msg="Start recovering state" Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907283808Z" level=info msg="Start event monitor" Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907310124Z" level=info msg="Start cni network conf syncer for default" Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907333872Z" level=info msg="Start streaming server" Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907356612Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907374072Z" level=info msg="runtime interface starting up..." Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907388952Z" level=info msg="starting plugins..." Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907421880Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 21 23:35:42.910101 containerd[2003]: time="2026-01-21T23:35:42.907640004Z" level=info msg="containerd successfully booted in 0.749981s" Jan 21 23:35:42.909878 systemd[1]: Started containerd.service - containerd container runtime. Jan 21 23:35:42.920096 amazon-ssm-agent[2036]: 2026-01-21 23:35:42.5042 INFO [amazon-ssm-agent] Starting Core Agent Jan 21 23:35:43.022413 amazon-ssm-agent[2036]: 2026-01-21 23:35:42.5042 INFO [amazon-ssm-agent] Registrar detected.
Attempting registration Jan 21 23:35:43.130223 amazon-ssm-agent[2036]: 2026-01-21 23:35:42.5042 INFO [Registrar] Starting registrar module Jan 21 23:35:43.231317 amazon-ssm-agent[2036]: 2026-01-21 23:35:42.5114 INFO [EC2Identity] Checking disk for registration info Jan 21 23:35:43.329639 amazon-ssm-agent[2036]: 2026-01-21 23:35:42.5115 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 21 23:35:43.430629 amazon-ssm-agent[2036]: 2026-01-21 23:35:42.5115 INFO [EC2Identity] Generating registration keypair Jan 21 23:35:43.530081 amazon-ssm-agent[2036]: 2026/01/21 23:35:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 23:35:43.530081 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 23:35:43.530318 amazon-ssm-agent[2036]: 2026/01/21 23:35:43 processing appconfig overrides Jan 21 23:35:43.530792 amazon-ssm-agent[2036]: 2026-01-21 23:35:43.4679 INFO [EC2Identity] Checking write access before registering Jan 21 23:35:43.563546 sshd_keygen[2019]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 21 23:35:43.573554 amazon-ssm-agent[2036]: 2026-01-21 23:35:43.4704 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 21 23:35:43.573554 amazon-ssm-agent[2036]: 2026-01-21 23:35:43.5295 INFO [EC2Identity] EC2 registration was successful. Jan 21 23:35:43.573740 amazon-ssm-agent[2036]: 2026-01-21 23:35:43.5295 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 21 23:35:43.573740 amazon-ssm-agent[2036]: 2026-01-21 23:35:43.5297 INFO [CredentialRefresher] credentialRefresher has started Jan 21 23:35:43.573740 amazon-ssm-agent[2036]: 2026-01-21 23:35:43.5297 INFO [CredentialRefresher] Starting credentials refresher loop Jan 21 23:35:43.573740 amazon-ssm-agent[2036]: 2026-01-21 23:35:43.5732 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 21 23:35:43.573740 amazon-ssm-agent[2036]: 2026-01-21 23:35:43.5734 INFO [CredentialRefresher] Credentials ready Jan 21 23:35:43.606689 tar[1982]: linux-arm64/README.md Jan 21 23:35:43.634070 amazon-ssm-agent[2036]: 2026-01-21 23:35:43.5736 INFO [CredentialRefresher] Next credential rotation will be in 29.9999925369 minutes Jan 21 23:35:43.639861 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 21 23:35:43.651425 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 21 23:35:43.656850 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 21 23:35:43.691674 systemd[1]: issuegen.service: Deactivated successfully. Jan 21 23:35:43.692359 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 21 23:35:43.702165 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 21 23:35:43.742382 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 21 23:35:43.755575 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 21 23:35:43.764774 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 21 23:35:43.769037 systemd[1]: Reached target getty.target - Login Prompts. Jan 21 23:35:44.570652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:35:44.580419 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 21 23:35:44.584536 systemd[1]: Startup finished in 4.372s (kernel) + 12.112s (initrd) + 14.416s (userspace) = 30.901s. 
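The containerd warning a few lines above ("failed to load cni during init ... no network config found in /etc/cni/net.d") is expected on a node that has not joined a cluster yet: containerd starts its CNI conf syncer and keeps retrying until a network add-on drops a config file, and the daemon still boots successfully. Purely as an illustration of what the syncer would accept, a minimal conflist is sketched below; the file name, network name and subnet are assumptions, and on a real cluster this file is normally written by the CNI add-on (flannel, Calico, the AWS VPC CNI, etc.) rather than by hand.

    # Sketch only: a minimal CNI config for /etc/cni/net.d, using the reference
    # bridge/host-local/portmap plugins expected under /opt/cni/bin (see binDirs above).
    # File name, network name and subnet are hypothetical.
    cat <<'EOF' > /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF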
Jan 21 23:35:44.588204 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 23:35:44.627144 amazon-ssm-agent[2036]: 2026-01-21 23:35:44.6229 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 21 23:35:44.728292 amazon-ssm-agent[2036]: 2026-01-21 23:35:44.6293 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2239) started Jan 21 23:35:44.829591 amazon-ssm-agent[2036]: 2026-01-21 23:35:44.6293 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 21 23:35:45.546971 kubelet[2235]: E0121 23:35:45.546912 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 23:35:45.551685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 23:35:45.552064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 23:35:45.553024 systemd[1]: kubelet.service: Consumed 1.496s CPU time, 257.4M memory peak. Jan 21 23:35:46.515396 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 21 23:35:46.517912 systemd[1]: Started sshd@0-172.31.29.34:22-68.220.241.50:47104.service - OpenSSH per-connection server daemon (68.220.241.50:47104). Jan 21 23:35:47.229695 sshd[2260]: Accepted publickey for core from 68.220.241.50 port 47104 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:35:47.233602 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:35:47.247501 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 21 23:35:47.249441 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 21 23:35:47.263193 systemd-logind[1971]: New session 1 of user core. Jan 21 23:35:47.286348 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 21 23:35:47.293624 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 21 23:35:47.315148 (systemd)[2265]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 21 23:35:47.320458 systemd-logind[1971]: New session c1 of user core. Jan 21 23:35:47.606828 systemd[2265]: Queued start job for default target default.target. Jan 21 23:35:47.613542 systemd[2265]: Created slice app.slice - User Application Slice. Jan 21 23:35:47.613616 systemd[2265]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 21 23:35:47.613649 systemd[2265]: Reached target paths.target - Paths. Jan 21 23:35:47.613773 systemd[2265]: Reached target timers.target - Timers. Jan 21 23:35:47.616425 systemd[2265]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 21 23:35:47.618259 systemd[2265]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 21 23:35:47.648163 systemd[2265]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 21 23:35:47.648352 systemd[2265]: Reached target sockets.target - Sockets. 
Jan 21 23:35:47.653245 systemd[2265]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 21 23:35:47.653670 systemd[2265]: Reached target basic.target - Basic System. Jan 21 23:35:47.653826 systemd[2265]: Reached target default.target - Main User Target. Jan 21 23:35:47.653890 systemd[2265]: Startup finished in 321ms. Jan 21 23:35:47.654111 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 21 23:35:47.662350 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 21 23:35:47.915377 systemd[1]: Started sshd@1-172.31.29.34:22-68.220.241.50:47118.service - OpenSSH per-connection server daemon (68.220.241.50:47118). Jan 21 23:35:48.385898 sshd[2279]: Accepted publickey for core from 68.220.241.50 port 47118 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:35:48.388315 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:35:48.397275 systemd-logind[1971]: New session 2 of user core. Jan 21 23:35:48.404297 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 21 23:35:48.623704 sshd[2282]: Connection closed by 68.220.241.50 port 47118 Jan 21 23:35:48.624874 sshd-session[2279]: pam_unix(sshd:session): session closed for user core Jan 21 23:35:48.637578 systemd[1]: sshd@1-172.31.29.34:22-68.220.241.50:47118.service: Deactivated successfully. Jan 21 23:35:48.642243 systemd[1]: session-2.scope: Deactivated successfully. Jan 21 23:35:48.646457 systemd-logind[1971]: Session 2 logged out. Waiting for processes to exit. Jan 21 23:35:48.650555 systemd-logind[1971]: Removed session 2. Jan 21 23:35:48.730633 systemd[1]: Started sshd@2-172.31.29.34:22-68.220.241.50:47132.service - OpenSSH per-connection server daemon (68.220.241.50:47132). Jan 21 23:35:49.216651 sshd[2288]: Accepted publickey for core from 68.220.241.50 port 47132 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:35:49.218843 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:35:49.226796 systemd-logind[1971]: New session 3 of user core. Jan 21 23:35:49.237295 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 21 23:35:49.466880 sshd[2291]: Connection closed by 68.220.241.50 port 47132 Jan 21 23:35:49.466678 sshd-session[2288]: pam_unix(sshd:session): session closed for user core Jan 21 23:35:49.476180 systemd-logind[1971]: Session 3 logged out. Waiting for processes to exit. Jan 21 23:35:49.476689 systemd[1]: sshd@2-172.31.29.34:22-68.220.241.50:47132.service: Deactivated successfully. Jan 21 23:35:49.480090 systemd[1]: session-3.scope: Deactivated successfully. Jan 21 23:35:49.484081 systemd-logind[1971]: Removed session 3. Jan 21 23:35:49.562506 systemd[1]: Started sshd@3-172.31.29.34:22-68.220.241.50:47138.service - OpenSSH per-connection server daemon (68.220.241.50:47138). Jan 21 23:35:50.023664 sshd[2297]: Accepted publickey for core from 68.220.241.50 port 47138 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:35:50.026056 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:35:50.036031 systemd-logind[1971]: New session 4 of user core. Jan 21 23:35:50.047311 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 21 23:35:50.262210 sshd[2300]: Connection closed by 68.220.241.50 port 47138 Jan 21 23:35:50.262092 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Jan 21 23:35:50.269605 systemd[1]: sshd@3-172.31.29.34:22-68.220.241.50:47138.service: Deactivated successfully. Jan 21 23:35:50.272609 systemd[1]: session-4.scope: Deactivated successfully. Jan 21 23:35:50.277362 systemd-logind[1971]: Session 4 logged out. Waiting for processes to exit. Jan 21 23:35:50.279564 systemd-logind[1971]: Removed session 4. Jan 21 23:35:50.358488 systemd[1]: Started sshd@4-172.31.29.34:22-68.220.241.50:47150.service - OpenSSH per-connection server daemon (68.220.241.50:47150). Jan 21 23:35:50.810570 sshd[2306]: Accepted publickey for core from 68.220.241.50 port 47150 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:35:50.812739 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:35:50.821145 systemd-logind[1971]: New session 5 of user core. Jan 21 23:35:50.835279 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 21 23:35:51.064000 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 21 23:35:51.064627 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 23:35:51.079497 sudo[2310]: pam_unix(sudo:session): session closed for user root Jan 21 23:35:51.157254 sshd[2309]: Connection closed by 68.220.241.50 port 47150 Jan 21 23:35:51.158453 sshd-session[2306]: pam_unix(sshd:session): session closed for user core Jan 21 23:35:51.166283 systemd[1]: sshd@4-172.31.29.34:22-68.220.241.50:47150.service: Deactivated successfully. Jan 21 23:35:51.171398 systemd[1]: session-5.scope: Deactivated successfully. Jan 21 23:35:51.173128 systemd-logind[1971]: Session 5 logged out. Waiting for processes to exit. Jan 21 23:35:51.176415 systemd-logind[1971]: Removed session 5. Jan 21 23:35:51.252370 systemd[1]: Started sshd@5-172.31.29.34:22-68.220.241.50:47154.service - OpenSSH per-connection server daemon (68.220.241.50:47154). Jan 21 23:35:51.708041 sshd[2317]: Accepted publickey for core from 68.220.241.50 port 47154 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:35:51.709568 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:35:51.717618 systemd-logind[1971]: New session 6 of user core. Jan 21 23:35:51.732621 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 21 23:35:51.874367 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 21 23:35:51.875506 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 23:35:51.886578 sudo[2322]: pam_unix(sudo:session): session closed for user root Jan 21 23:35:51.899245 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 21 23:35:51.899913 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 23:35:51.917522 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
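The three sudo invocations above strip the shipped audit rule fragments and restart audit-rules.service, and the audit records that follow show the service reloading the kernel ruleset with auditctl, after which augenrules reports "No rules". Reconstructed from the commands recorded above (and from the auditctl command line captured in the PROCTITLE event below), the manual equivalent is roughly:

    # Reconstruction of the logged steps; paths come from the sudo/audit records above.
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules     # which runs: /sbin/auditctl -R /etc/audit/audit.rules
    sudo auditctl -l                       # list the loaded rules; here it would show none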
Jan 21 23:35:51.976000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 23:35:51.979329 kernel: kauditd_printk_skb: 86 callbacks suppressed Jan 21 23:35:51.979437 kernel: audit: type=1305 audit(1769038551.976:242): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 23:35:51.979485 augenrules[2344]: No rules Jan 21 23:35:51.976000 audit[2344]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff5126cc0 a2=420 a3=0 items=0 ppid=2325 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:51.985602 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 23:35:51.986421 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 23:35:51.991075 kernel: audit: type=1300 audit(1769038551.976:242): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff5126cc0 a2=420 a3=0 items=0 ppid=2325 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:51.991206 kernel: audit: type=1327 audit(1769038551.976:242): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 23:35:51.976000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 23:35:51.990086 sudo[2321]: pam_unix(sudo:session): session closed for user root Jan 21 23:35:51.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:51.999162 kernel: audit: type=1130 audit(1769038551.986:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:51.999292 kernel: audit: type=1131 audit(1769038551.987:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:51.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:51.989000 audit[2321]: USER_END pid=2321 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:35:52.008712 kernel: audit: type=1106 audit(1769038551.989:245): pid=2321 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:35:52.008840 kernel: audit: type=1104 audit(1769038551.989:246): pid=2321 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:51.989000 audit[2321]: CRED_DISP pid=2321 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:35:52.069550 sshd[2320]: Connection closed by 68.220.241.50 port 47154 Jan 21 23:35:52.070404 sshd-session[2317]: pam_unix(sshd:session): session closed for user core Jan 21 23:35:52.071000 audit[2317]: USER_END pid=2317 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:35:52.079779 systemd[1]: sshd@5-172.31.29.34:22-68.220.241.50:47154.service: Deactivated successfully. Jan 21 23:35:52.072000 audit[2317]: CRED_DISP pid=2317 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:35:52.088894 kernel: audit: type=1106 audit(1769038552.071:247): pid=2317 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:35:52.089024 kernel: audit: type=1104 audit(1769038552.072:248): pid=2317 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:35:52.084463 systemd[1]: session-6.scope: Deactivated successfully. Jan 21 23:35:52.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.34:22-68.220.241.50:47154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:52.095459 systemd-logind[1971]: Session 6 logged out. Waiting for processes to exit. Jan 21 23:35:52.098604 kernel: audit: type=1131 audit(1769038552.079:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.34:22-68.220.241.50:47154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:52.097680 systemd-logind[1971]: Removed session 6. Jan 21 23:35:52.164034 systemd[1]: Started sshd@6-172.31.29.34:22-68.220.241.50:47158.service - OpenSSH per-connection server daemon (68.220.241.50:47158). Jan 21 23:35:52.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.34:22-68.220.241.50:47158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:52.632000 audit[2353]: USER_ACCT pid=2353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:35:52.633802 sshd[2353]: Accepted publickey for core from 68.220.241.50 port 47158 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:35:52.634000 audit[2353]: CRED_ACQ pid=2353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:35:52.634000 audit[2353]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc79e8330 a2=3 a3=0 items=0 ppid=1 pid=2353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:52.634000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:35:52.636901 sshd-session[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:35:52.646623 systemd-logind[1971]: New session 7 of user core. Jan 21 23:35:52.668330 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 21 23:35:52.672000 audit[2353]: USER_START pid=2353 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:35:52.676000 audit[2356]: CRED_ACQ pid=2356 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:35:52.798530 sudo[2357]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 21 23:35:52.797000 audit[2357]: USER_ACCT pid=2357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:35:52.797000 audit[2357]: CRED_REFR pid=2357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:35:52.799196 sudo[2357]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 23:35:52.801000 audit[2357]: USER_START pid=2357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:35:54.100645 systemd[1]: Starting docker.service - Docker Application Container Engine... 
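The PROCTITLE values in these audit records are hex-encoded command lines with NUL-separated arguments, so they can be decoded directly (ausearch -i performs the same interpretation). For example, the value logged for the privileged sshd process above decodes to "sshd-session: core [priv]", and the auditctl entry from the audit-rules restart decodes to the command noted earlier. A small helper, assuming xxd is available on the host:

    # Decode an audit PROCTITLE field: hex -> raw bytes, NUL separators -> spaces.
    decode_proctitle() { printf '%s' "$1" | xxd -r -p | tr '\0' ' '; echo; }
    decode_proctitle 737368642D73657373696F6E3A20636F7265205B707269765D
    # -> sshd-session: core [priv]
    decode_proctitle 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
    # -> /sbin/auditctl -R /etc/audit/audit.rules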
Jan 21 23:35:54.123508 (dockerd)[2375]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 21 23:35:55.232309 dockerd[2375]: time="2026-01-21T23:35:55.231964083Z" level=info msg="Starting up" Jan 21 23:35:55.239209 dockerd[2375]: time="2026-01-21T23:35:55.239164105Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 21 23:35:55.259192 dockerd[2375]: time="2026-01-21T23:35:55.259105545Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 21 23:35:55.294849 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3524526619-merged.mount: Deactivated successfully. Jan 21 23:35:55.343111 dockerd[2375]: time="2026-01-21T23:35:55.343045686Z" level=info msg="Loading containers: start." Jan 21 23:35:55.359009 kernel: Initializing XFRM netlink socket Jan 21 23:35:55.509000 audit[2424]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.509000 audit[2424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffbc79880 a2=0 a3=0 items=0 ppid=2375 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.509000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 23:35:55.513000 audit[2426]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.513000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd3c623a0 a2=0 a3=0 items=0 ppid=2375 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.513000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 23:35:55.517000 audit[2428]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.517000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3216950 a2=0 a3=0 items=0 ppid=2375 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.517000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 23:35:55.521000 audit[2430]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.521000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf6471d0 a2=0 a3=0 items=0 ppid=2375 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.521000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 23:35:55.526000 audit[2432]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.526000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffbc11ca0 a2=0 a3=0 items=0 ppid=2375 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.526000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 23:35:55.530000 audit[2434]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.530000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff169ede0 a2=0 a3=0 items=0 ppid=2375 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.530000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 23:35:55.534000 audit[2436]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.534000 audit[2436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff6b4d700 a2=0 a3=0 items=0 ppid=2375 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.534000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 23:35:55.538000 audit[2438]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.538000 audit[2438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc63aa4d0 a2=0 a3=0 items=0 ppid=2375 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.538000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 23:35:55.578000 audit[2441]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.578000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd4a78610 a2=0 a3=0 items=0 ppid=2375 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.578000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 21 23:35:55.582000 audit[2443]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.582000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe145d9e0 a2=0 a3=0 items=0 ppid=2375 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.582000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 23:35:55.586000 audit[2445]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.586000 audit[2445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe5162240 a2=0 a3=0 items=0 ppid=2375 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 23:35:55.590000 audit[2447]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.590000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc337dab0 a2=0 a3=0 items=0 ppid=2375 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 23:35:55.594000 audit[2449]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.594000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc657d890 a2=0 a3=0 items=0 ppid=2375 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.594000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 23:35:55.663000 audit[2479]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.663000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff5086830 a2=0 a3=0 items=0 ppid=2375 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.663000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 23:35:55.667000 audit[2481]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.667000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffc97e2a0 a2=0 a3=0 items=0 ppid=2375 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 23:35:55.671000 audit[2483]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.671000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd287da0 a2=0 a3=0 items=0 ppid=2375 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.671000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 23:35:55.675000 audit[2485]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.675000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4748cf0 a2=0 a3=0 items=0 ppid=2375 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.675000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 23:35:55.679000 audit[2487]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.679000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdbaeb710 a2=0 a3=0 items=0 ppid=2375 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 23:35:55.683000 audit[2489]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.683000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffb8ec8f0 a2=0 a3=0 items=0 ppid=2375 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.683000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 23:35:55.688000 audit[2491]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2491 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.688000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe0c97fd0 a2=0 a3=0 items=0 ppid=2375 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.688000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 23:35:55.692000 audit[2493]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.692000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffcc156a80 a2=0 a3=0 items=0 ppid=2375 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 23:35:55.696000 audit[2495]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.696000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd07a4d00 a2=0 a3=0 items=0 ppid=2375 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 21 23:35:55.701000 audit[2497]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.701000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdc5e6ab0 a2=0 a3=0 items=0 ppid=2375 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.701000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 23:35:55.705000 audit[2499]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.705000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffec6b3450 a2=0 a3=0 items=0 ppid=2375 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.705000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 23:35:55.709000 audit[2501]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2501 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.709000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc1cf79d0 a2=0 a3=0 items=0 ppid=2375 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 23:35:55.713000 audit[2503]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.713000 audit[2503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe4bf2a70 a2=0 a3=0 items=0 ppid=2375 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.713000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 23:35:55.725000 audit[2508]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.725000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdbad6f40 a2=0 a3=0 items=0 ppid=2375 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.725000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 23:35:55.730000 audit[2510]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.730000 audit[2510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe9398ca0 a2=0 a3=0 items=0 ppid=2375 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.730000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 23:35:55.733000 audit[2512]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.733000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd76c6730 a2=0 a3=0 items=0 ppid=2375 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.733000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 23:35:55.738000 audit[2514]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.738000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffbca3690 a2=0 a3=0 items=0 ppid=2375 pid=2514 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 23:35:55.743000 audit[2516]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.743000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffe93c540 a2=0 a3=0 items=0 ppid=2375 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 23:35:55.747000 audit[2518]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:35:55.747000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff6fc0ad0 a2=0 a3=0 items=0 ppid=2375 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.747000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 23:35:55.768519 (udev-worker)[2396]: Network interface NamePolicy= disabled on kernel command line. Jan 21 23:35:55.790733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 21 23:35:55.792000 audit[2522]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.792000 audit[2522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffffa986180 a2=0 a3=0 items=0 ppid=2375 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.792000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 21 23:35:55.795781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
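The PROCTITLE records above carry the invoking command line hex-encoded, with NUL bytes separating the arguments (the accompanying SYSCALL records, arch=c00000b7 syscall=211, are the arm64 sendmsg calls that ship each nftables batch over netlink). As a rough aid to reading them, a minimal Python sketch (not part of the log) that recovers the underlying command from one of the records above:

    # Minimal sketch: decode the hex "proctitle=" value of an audit PROCTITLE
    # record; arguments are separated by NUL bytes, as in /proc/<pid>/cmdline.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    # The DOCKER-USER chain creation logged above:
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974"
        "002D740066696C746572002D4E00444F434B45522D55534552"
    ))
    # -> /usr/bin/iptables --wait -t filter -N DOCKER-USER

Decoded this way, the batch above is dockerd registering its usual chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) for both the IPv4 (family=2) and IPv6 (family=10) tables.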
Jan 21 23:35:55.800000 audit[2525]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.800000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffffba8c5c0 a2=0 a3=0 items=0 ppid=2375 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 21 23:35:55.825000 audit[2535]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.825000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffdef7cc20 a2=0 a3=0 items=0 ppid=2375 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 21 23:35:55.845000 audit[2541]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.845000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff955b2c0 a2=0 a3=0 items=0 ppid=2375 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 21 23:35:55.851000 audit[2543]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.851000 audit[2543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffffa7a8620 a2=0 a3=0 items=0 ppid=2375 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 21 23:35:55.856000 audit[2545]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.856000 audit[2545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdb6014f0 a2=0 a3=0 items=0 ppid=2375 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.856000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 21 23:35:55.861000 audit[2547]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.861000 audit[2547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffee5d2a60 a2=0 a3=0 items=0 ppid=2375 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.861000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 23:35:55.866000 audit[2549]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:35:55.866000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff3ef4530 a2=0 a3=0 items=0 ppid=2375 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:35:55.866000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 21 23:35:55.869301 systemd-networkd[1597]: docker0: Link UP Jan 21 23:35:55.891270 dockerd[2375]: time="2026-01-21T23:35:55.891088454Z" level=info msg="Loading containers: done." Jan 21 23:35:55.920393 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck261803067-merged.mount: Deactivated successfully. Jan 21 23:35:55.976825 dockerd[2375]: time="2026-01-21T23:35:55.976056493Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 21 23:35:55.976825 dockerd[2375]: time="2026-01-21T23:35:55.976227599Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 21 23:35:55.976825 dockerd[2375]: time="2026-01-21T23:35:55.976535329Z" level=info msg="Initializing buildkit" Jan 21 23:35:56.044939 dockerd[2375]: time="2026-01-21T23:35:56.044805489Z" level=info msg="Completed buildkit initialization" Jan 21 23:35:56.062205 dockerd[2375]: time="2026-01-21T23:35:56.062140921Z" level=info msg="Daemon has completed initialization" Jan 21 23:35:56.062700 dockerd[2375]: time="2026-01-21T23:35:56.062454636Z" level=info msg="API listen on /run/docker.sock" Jan 21 23:35:56.065950 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 21 23:35:56.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:35:56.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:35:56.245390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:35:56.273508 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 23:35:56.372121 kubelet[2591]: E0121 23:35:56.371949 2591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 23:35:56.380196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 23:35:56.380518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 23:35:56.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 23:35:56.381725 systemd[1]: kubelet.service: Consumed 352ms CPU time, 107.5M memory peak. Jan 21 23:35:57.246086 containerd[2003]: time="2026-01-21T23:35:57.246031015Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 21 23:35:57.979049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272539230.mount: Deactivated successfully. Jan 21 23:35:59.391906 containerd[2003]: time="2026-01-21T23:35:59.391814677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:35:59.394115 containerd[2003]: time="2026-01-21T23:35:59.394014053Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845792" Jan 21 23:35:59.398026 containerd[2003]: time="2026-01-21T23:35:59.396924505Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:35:59.405129 containerd[2003]: time="2026-01-21T23:35:59.405041349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:35:59.409066 containerd[2003]: time="2026-01-21T23:35:59.408585252Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.16248809s" Jan 21 23:35:59.409066 containerd[2003]: time="2026-01-21T23:35:59.408659579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 21 23:35:59.409945 containerd[2003]: time="2026-01-21T23:35:59.409801575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 21 23:36:01.363120 containerd[2003]: time="2026-01-21T23:36:01.362647450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:01.365039 containerd[2003]: 
time="2026-01-21T23:36:01.364653110Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 21 23:36:01.366373 containerd[2003]: time="2026-01-21T23:36:01.366265884Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:01.372318 containerd[2003]: time="2026-01-21T23:36:01.371589445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:01.374407 containerd[2003]: time="2026-01-21T23:36:01.374328334Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.964361997s" Jan 21 23:36:01.374875 containerd[2003]: time="2026-01-21T23:36:01.374666086Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 21 23:36:01.376269 containerd[2003]: time="2026-01-21T23:36:01.376114865Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 21 23:36:02.829631 containerd[2003]: time="2026-01-21T23:36:02.829536587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:02.831427 containerd[2003]: time="2026-01-21T23:36:02.831029564Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 21 23:36:02.832555 containerd[2003]: time="2026-01-21T23:36:02.832482397Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:02.838020 containerd[2003]: time="2026-01-21T23:36:02.837939643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:02.840253 containerd[2003]: time="2026-01-21T23:36:02.840172326Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.463987296s" Jan 21 23:36:02.840253 containerd[2003]: time="2026-01-21T23:36:02.840240428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 21 23:36:02.841280 containerd[2003]: time="2026-01-21T23:36:02.841208644Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 21 23:36:04.167176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39658472.mount: Deactivated successfully. 
Jan 21 23:36:04.866721 containerd[2003]: time="2026-01-21T23:36:04.866657823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:04.878251 containerd[2003]: time="2026-01-21T23:36:04.878059607Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=17713718" Jan 21 23:36:04.889810 containerd[2003]: time="2026-01-21T23:36:04.889679682Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:04.894330 containerd[2003]: time="2026-01-21T23:36:04.894226931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:04.895734 containerd[2003]: time="2026-01-21T23:36:04.895342661Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 2.054062341s" Jan 21 23:36:04.895734 containerd[2003]: time="2026-01-21T23:36:04.895411099Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 21 23:36:04.896457 containerd[2003]: time="2026-01-21T23:36:04.896385383Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 21 23:36:05.634768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596507072.mount: Deactivated successfully. Jan 21 23:36:06.631233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 21 23:36:06.636367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:36:07.125221 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 21 23:36:07.125349 kernel: audit: type=1130 audit(1769038567.116:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:07.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:07.117458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:36:07.134329 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 23:36:07.259639 kubelet[2735]: E0121 23:36:07.259540 2735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 23:36:07.264091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 23:36:07.264402 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
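The kubelet exits with status=1 on each of these restarts simply because /var/lib/kubelet/config.yaml has not been written yet; on a node provisioned this way (note the kubeadm-style KUBELET_KUBEADM_ARGS drop-in referenced above) that file is normally created by kubeadm during init/join, so the crash loop is expected to persist until then. A minimal Python sketch of the same check, for watching a node come up:

    from pathlib import Path

    # Minimal sketch: the error above means only that this file does not exist
    # yet; kubeadm writes it during init/join.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.is_file():
        print(f"{cfg} present ({cfg.stat().st_size} bytes)")
    else:
        print(f"{cfg} missing - kubelet will keep exiting with status=1/FAILURE")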
Jan 21 23:36:07.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 23:36:07.267154 systemd[1]: kubelet.service: Consumed 374ms CPU time, 105M memory peak. Jan 21 23:36:07.274043 kernel: audit: type=1131 audit(1769038567.266:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 23:36:07.322040 containerd[2003]: time="2026-01-21T23:36:07.321712472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:07.334021 containerd[2003]: time="2026-01-21T23:36:07.333476116Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Jan 21 23:36:07.349077 containerd[2003]: time="2026-01-21T23:36:07.349004592Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:07.366654 containerd[2003]: time="2026-01-21T23:36:07.366581799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:07.369526 containerd[2003]: time="2026-01-21T23:36:07.369168725Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.471592037s" Jan 21 23:36:07.369526 containerd[2003]: time="2026-01-21T23:36:07.369228047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 21 23:36:07.369815 containerd[2003]: time="2026-01-21T23:36:07.369787363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 21 23:36:07.883686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660322347.mount: Deactivated successfully. 
Jan 21 23:36:07.898047 containerd[2003]: time="2026-01-21T23:36:07.897702003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 23:36:07.899844 containerd[2003]: time="2026-01-21T23:36:07.899744354Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 21 23:36:07.902398 containerd[2003]: time="2026-01-21T23:36:07.902326506Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 23:36:07.907236 containerd[2003]: time="2026-01-21T23:36:07.907106235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 23:36:07.910677 containerd[2003]: time="2026-01-21T23:36:07.910596093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 540.761605ms" Jan 21 23:36:07.910677 containerd[2003]: time="2026-01-21T23:36:07.910663475Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 21 23:36:07.911284 containerd[2003]: time="2026-01-21T23:36:07.911248003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 21 23:36:08.526609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065898082.mount: Deactivated successfully. 
Jan 21 23:36:11.886041 containerd[2003]: time="2026-01-21T23:36:11.885682991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:11.887788 containerd[2003]: time="2026-01-21T23:36:11.887695237Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Jan 21 23:36:11.889619 containerd[2003]: time="2026-01-21T23:36:11.889542528Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:11.896129 containerd[2003]: time="2026-01-21T23:36:11.896075864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:11.897215 containerd[2003]: time="2026-01-21T23:36:11.897159066Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.985864071s" Jan 21 23:36:11.897341 containerd[2003]: time="2026-01-21T23:36:11.897212548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 21 23:36:12.795950 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 21 23:36:12.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:12.807013 kernel: audit: type=1131 audit(1769038572.796:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:12.814000 audit: BPF prog-id=66 op=UNLOAD Jan 21 23:36:12.817052 kernel: audit: type=1334 audit(1769038572.814:305): prog-id=66 op=UNLOAD Jan 21 23:36:17.299855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 21 23:36:17.306478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:36:17.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:17.644307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:36:17.654221 kernel: audit: type=1130 audit(1769038577.643:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:36:17.654767 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 23:36:17.729222 kubelet[2831]: E0121 23:36:17.729163 2831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 23:36:17.735717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 23:36:17.736254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 23:36:17.737499 systemd[1]: kubelet.service: Consumed 299ms CPU time, 107.2M memory peak. Jan 21 23:36:17.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 23:36:17.745309 kernel: audit: type=1131 audit(1769038577.736:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 23:36:20.554079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:36:20.555322 systemd[1]: kubelet.service: Consumed 299ms CPU time, 107.2M memory peak. Jan 21 23:36:20.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:20.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:20.564369 kernel: audit: type=1130 audit(1769038580.554:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:20.564503 kernel: audit: type=1131 audit(1769038580.554:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:20.566035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:36:20.613251 systemd[1]: Reload requested from client PID 2845 ('systemctl') (unit session-7.scope)... Jan 21 23:36:20.613285 systemd[1]: Reloading... Jan 21 23:36:20.879033 zram_generator::config[2894]: No configuration found. Jan 21 23:36:21.365537 systemd[1]: Reloading finished in 751 ms. 
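The burst of "audit: BPF prog-id=... op=LOAD/UNLOAD" records that follows appears to be systemd swapping out its per-unit cgroup BPF programs (device filters, socket and IP accounting hooks) as part of the daemon-reload that just finished: each old program id is unloaded and a freshly loaded one takes its place. A rough Python sketch for tallying that churn from the journal text:

    import re
    from collections import Counter

    # Rough sketch: count LOAD vs UNLOAD events in the "audit: BPF prog-id=N
    # op=..." records emitted around a systemd daemon-reload.
    BPF_EVENT = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def bpf_churn(journal_text: str) -> Counter:
        return Counter(op for _, op in BPF_EVENT.findall(journal_text))

    # Note: the kernel echo lines (audit: type=1334) duplicate some records,
    # so counts taken from the raw console text are upper bounds.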
Jan 21 23:36:21.434238 kernel: audit: type=1334 audit(1769038581.428:310): prog-id=70 op=LOAD Jan 21 23:36:21.434410 kernel: audit: type=1334 audit(1769038581.428:311): prog-id=63 op=UNLOAD Jan 21 23:36:21.428000 audit: BPF prog-id=70 op=LOAD Jan 21 23:36:21.428000 audit: BPF prog-id=63 op=UNLOAD Jan 21 23:36:21.437527 kernel: audit: type=1334 audit(1769038581.431:312): prog-id=71 op=LOAD Jan 21 23:36:21.431000 audit: BPF prog-id=71 op=LOAD Jan 21 23:36:21.440548 kernel: audit: type=1334 audit(1769038581.431:313): prog-id=72 op=LOAD Jan 21 23:36:21.431000 audit: BPF prog-id=72 op=LOAD Jan 21 23:36:21.443558 kernel: audit: type=1334 audit(1769038581.431:314): prog-id=64 op=UNLOAD Jan 21 23:36:21.431000 audit: BPF prog-id=64 op=UNLOAD Jan 21 23:36:21.447018 kernel: audit: type=1334 audit(1769038581.431:315): prog-id=65 op=UNLOAD Jan 21 23:36:21.431000 audit: BPF prog-id=65 op=UNLOAD Jan 21 23:36:21.449510 kernel: audit: type=1334 audit(1769038581.434:316): prog-id=73 op=LOAD Jan 21 23:36:21.434000 audit: BPF prog-id=73 op=LOAD Jan 21 23:36:21.434000 audit: BPF prog-id=74 op=LOAD Jan 21 23:36:21.451572 kernel: audit: type=1334 audit(1769038581.434:317): prog-id=74 op=LOAD Jan 21 23:36:21.434000 audit: BPF prog-id=53 op=UNLOAD Jan 21 23:36:21.434000 audit: BPF prog-id=54 op=UNLOAD Jan 21 23:36:21.437000 audit: BPF prog-id=75 op=LOAD Jan 21 23:36:21.437000 audit: BPF prog-id=50 op=UNLOAD Jan 21 23:36:21.437000 audit: BPF prog-id=76 op=LOAD Jan 21 23:36:21.437000 audit: BPF prog-id=77 op=LOAD Jan 21 23:36:21.437000 audit: BPF prog-id=51 op=UNLOAD Jan 21 23:36:21.437000 audit: BPF prog-id=52 op=UNLOAD Jan 21 23:36:21.443000 audit: BPF prog-id=78 op=LOAD Jan 21 23:36:21.443000 audit: BPF prog-id=55 op=UNLOAD Jan 21 23:36:21.446000 audit: BPF prog-id=79 op=LOAD Jan 21 23:36:21.446000 audit: BPF prog-id=60 op=UNLOAD Jan 21 23:36:21.446000 audit: BPF prog-id=80 op=LOAD Jan 21 23:36:21.446000 audit: BPF prog-id=81 op=LOAD Jan 21 23:36:21.446000 audit: BPF prog-id=61 op=UNLOAD Jan 21 23:36:21.446000 audit: BPF prog-id=62 op=UNLOAD Jan 21 23:36:21.449000 audit: BPF prog-id=82 op=LOAD Jan 21 23:36:21.449000 audit: BPF prog-id=47 op=UNLOAD Jan 21 23:36:21.449000 audit: BPF prog-id=83 op=LOAD Jan 21 23:36:21.449000 audit: BPF prog-id=84 op=LOAD Jan 21 23:36:21.449000 audit: BPF prog-id=48 op=UNLOAD Jan 21 23:36:21.449000 audit: BPF prog-id=49 op=UNLOAD Jan 21 23:36:21.454000 audit: BPF prog-id=85 op=LOAD Jan 21 23:36:21.454000 audit: BPF prog-id=59 op=UNLOAD Jan 21 23:36:21.455000 audit: BPF prog-id=86 op=LOAD Jan 21 23:36:21.455000 audit: BPF prog-id=69 op=UNLOAD Jan 21 23:36:21.458000 audit: BPF prog-id=87 op=LOAD Jan 21 23:36:21.458000 audit: BPF prog-id=56 op=UNLOAD Jan 21 23:36:21.458000 audit: BPF prog-id=88 op=LOAD Jan 21 23:36:21.458000 audit: BPF prog-id=89 op=LOAD Jan 21 23:36:21.458000 audit: BPF prog-id=57 op=UNLOAD Jan 21 23:36:21.458000 audit: BPF prog-id=58 op=UNLOAD Jan 21 23:36:21.486011 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 21 23:36:21.486190 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 21 23:36:21.486922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:36:21.487057 systemd[1]: kubelet.service: Consumed 232ms CPU time, 95.1M memory peak. Jan 21 23:36:21.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 21 23:36:21.490544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:36:21.824931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:36:21.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:21.851788 (kubelet)[2954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 23:36:21.923094 kubelet[2954]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 23:36:21.923094 kubelet[2954]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 23:36:21.924012 kubelet[2954]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 23:36:21.924012 kubelet[2954]: I0121 23:36:21.923759 2954 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 23:36:23.581012 kubelet[2954]: I0121 23:36:23.580262 2954 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 21 23:36:23.581012 kubelet[2954]: I0121 23:36:23.580316 2954 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 23:36:23.581919 kubelet[2954]: I0121 23:36:23.581873 2954 server.go:954] "Client rotation is on, will bootstrap in background" Jan 21 23:36:23.641550 kubelet[2954]: E0121 23:36:23.641487 2954 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.34:6443: connect: connection refused" logger="UnhandledError" Jan 21 23:36:23.644150 kubelet[2954]: I0121 23:36:23.643807 2954 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 23:36:23.656191 kubelet[2954]: I0121 23:36:23.656147 2954 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 23:36:23.663773 kubelet[2954]: I0121 23:36:23.663470 2954 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 21 23:36:23.664315 kubelet[2954]: I0121 23:36:23.664261 2954 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 23:36:23.664707 kubelet[2954]: I0121 23:36:23.664419 2954 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 23:36:23.665093 kubelet[2954]: I0121 23:36:23.665072 2954 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 23:36:23.665215 kubelet[2954]: I0121 23:36:23.665196 2954 container_manager_linux.go:304] "Creating device plugin manager" Jan 21 23:36:23.665622 kubelet[2954]: I0121 23:36:23.665603 2954 state_mem.go:36] "Initialized new in-memory state store" Jan 21 23:36:23.672045 kubelet[2954]: I0121 23:36:23.671635 2954 kubelet.go:446] "Attempting to sync node with API server" Jan 21 23:36:23.672045 kubelet[2954]: I0121 23:36:23.671724 2954 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 23:36:23.672045 kubelet[2954]: I0121 23:36:23.671769 2954 kubelet.go:352] "Adding apiserver pod source" Jan 21 23:36:23.672045 kubelet[2954]: I0121 23:36:23.671789 2954 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 23:36:23.680017 kubelet[2954]: W0121 23:36:23.679041 2954 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-34&limit=500&resourceVersion=0": dial tcp 172.31.29.34:6443: connect: connection refused Jan 21 23:36:23.680017 kubelet[2954]: E0121 23:36:23.679143 2954 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-34&limit=500&resourceVersion=0\": dial tcp 172.31.29.34:6443: connect: connection refused" logger="UnhandledError" Jan 21 23:36:23.680017 kubelet[2954]: I0121 
23:36:23.679266 2954 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 23:36:23.680660 kubelet[2954]: I0121 23:36:23.680628 2954 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 23:36:23.681001 kubelet[2954]: W0121 23:36:23.680956 2954 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 21 23:36:23.685387 kubelet[2954]: I0121 23:36:23.685349 2954 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 23:36:23.685707 kubelet[2954]: I0121 23:36:23.685683 2954 server.go:1287] "Started kubelet" Jan 21 23:36:23.691046 kubelet[2954]: I0121 23:36:23.690861 2954 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 23:36:23.691446 kubelet[2954]: I0121 23:36:23.691404 2954 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 23:36:23.694141 kubelet[2954]: I0121 23:36:23.694074 2954 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 23:36:23.705024 kubelet[2954]: I0121 23:36:23.704178 2954 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 23:36:23.705024 kubelet[2954]: E0121 23:36:23.704136 2954 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.34:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-34.188ce338168b04ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-34,UID:ip-172-31-29-34,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-34,},FirstTimestamp:2026-01-21 23:36:23.685645498 +0000 UTC m=+1.827190038,LastTimestamp:2026-01-21 23:36:23.685645498 +0000 UTC m=+1.827190038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-34,}" Jan 21 23:36:23.705024 kubelet[2954]: W0121 23:36:23.704758 2954 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.34:6443: connect: connection refused Jan 21 23:36:23.705024 kubelet[2954]: E0121 23:36:23.704833 2954 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.34:6443: connect: connection refused" logger="UnhandledError" Jan 21 23:36:23.708383 kubelet[2954]: I0121 23:36:23.708318 2954 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 23:36:23.708892 kubelet[2954]: E0121 23:36:23.708838 2954 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-34\" not found" Jan 21 23:36:23.709833 kubelet[2954]: I0121 23:36:23.709765 2954 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 23:36:23.709966 kubelet[2954]: I0121 
23:36:23.709879 2954 reconciler.go:26] "Reconciler: start to sync state" Jan 21 23:36:23.709000 audit[2966]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:23.709000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe8368740 a2=0 a3=0 items=0 ppid=2954 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 23:36:23.713027 kubelet[2954]: I0121 23:36:23.712921 2954 factory.go:221] Registration of the systemd container factory successfully Jan 21 23:36:23.713170 kubelet[2954]: I0121 23:36:23.713122 2954 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 23:36:23.713889 kubelet[2954]: W0121 23:36:23.713800 2954 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.34:6443: connect: connection refused Jan 21 23:36:23.714076 kubelet[2954]: E0121 23:36:23.713898 2954 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.34:6443: connect: connection refused" logger="UnhandledError" Jan 21 23:36:23.714166 kubelet[2954]: E0121 23:36:23.714060 2954 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-34?timeout=10s\": dial tcp 172.31.29.34:6443: connect: connection refused" interval="200ms" Jan 21 23:36:23.713000 audit[2967]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:23.713000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2cfa930 a2=0 a3=0 items=0 ppid=2954 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 23:36:23.717776 kubelet[2954]: I0121 23:36:23.717721 2954 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 23:36:23.719571 kubelet[2954]: I0121 23:36:23.719512 2954 factory.go:221] Registration of the containerd container factory successfully Jan 21 23:36:23.719894 kubelet[2954]: I0121 23:36:23.719868 2954 server.go:479] "Adding debug handlers to kubelet server" Jan 21 23:36:23.722000 audit[2969]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:23.722000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdd085510 a2=0 a3=0 items=0 ppid=2954 pid=2969 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 23:36:23.728000 audit[2971]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:23.728000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcf9f57f0 a2=0 a3=0 items=0 ppid=2954 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 23:36:23.740000 audit[2974]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:23.740000 audit[2974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff4177260 a2=0 a3=0 items=0 ppid=2954 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 21 23:36:23.743608 kubelet[2954]: I0121 23:36:23.743216 2954 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 23:36:23.744000 audit[2975]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:23.744000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe5f08300 a2=0 a3=0 items=0 ppid=2954 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 23:36:23.746531 kubelet[2954]: I0121 23:36:23.745734 2954 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 23:36:23.746531 kubelet[2954]: I0121 23:36:23.745789 2954 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 21 23:36:23.746531 kubelet[2954]: I0121 23:36:23.745822 2954 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
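The "Creating Container Manager object based on Node Config" line above embeds the kubelet's resolved node configuration as a single JSON object after nodeConfig=, including the hard-eviction thresholds it will enforce (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, plus the 5% inode thresholds). A short Python sketch that pulls that object out of such a journal line and lists the thresholds:

    import json, re

    # Rough sketch: extract the JSON after "nodeConfig=" from the single journal
    # line quoted above and list its hard-eviction thresholds.
    def eviction_thresholds(line: str):
        payload = json.loads(re.search(r"nodeConfig=(\{.*\})", line).group(1))
        for t in payload["HardEvictionThresholds"]:
            value = t["Value"]
            limit = value["Quantity"] or f'{value["Percentage"]:.0%}'
            yield t["Signal"], t["Operator"], limit

    # Yields e.g. ("memory.available", "LessThan", "100Mi") and
    # ("nodefs.available", "LessThan", "10%") for the line above.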
Jan 21 23:36:23.746531 kubelet[2954]: I0121 23:36:23.745837 2954 kubelet.go:2382] "Starting kubelet main sync loop" Jan 21 23:36:23.746531 kubelet[2954]: E0121 23:36:23.745904 2954 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 23:36:23.746000 audit[2977]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:23.746000 audit[2977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd577e010 a2=0 a3=0 items=0 ppid=2954 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.746000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 23:36:23.749000 audit[2978]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:23.749000 audit[2978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff517bd0 a2=0 a3=0 items=0 ppid=2954 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 23:36:23.750000 audit[2980]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:23.756011 kubelet[2954]: W0121 23:36:23.755649 2954 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.34:6443: connect: connection refused Jan 21 23:36:23.750000 audit[2980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff58bdb30 a2=0 a3=0 items=0 ppid=2954 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.750000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 23:36:23.757000 audit[2981]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:23.757000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffccec7270 a2=0 a3=0 items=0 ppid=2954 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 23:36:23.759388 kubelet[2954]: E0121 23:36:23.759342 2954 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://172.31.29.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.34:6443: connect: connection refused" logger="UnhandledError" Jan 21 23:36:23.760083 kubelet[2954]: E0121 23:36:23.759565 2954 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 23:36:23.763000 audit[2984]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:23.763000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6bb2c70 a2=0 a3=0 items=0 ppid=2954 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.763000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 23:36:23.766000 audit[2985]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:23.766000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc90ba500 a2=0 a3=0 items=0 ppid=2954 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:23.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 23:36:23.772661 kubelet[2954]: I0121 23:36:23.772620 2954 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 23:36:23.772661 kubelet[2954]: I0121 23:36:23.772659 2954 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 23:36:23.772875 kubelet[2954]: I0121 23:36:23.772695 2954 state_mem.go:36] "Initialized new in-memory state store" Jan 21 23:36:23.778991 kubelet[2954]: I0121 23:36:23.778942 2954 policy_none.go:49] "None policy: Start" Jan 21 23:36:23.779165 kubelet[2954]: I0121 23:36:23.779008 2954 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 23:36:23.779165 kubelet[2954]: I0121 23:36:23.779035 2954 state_mem.go:35] "Initializing new in-memory state store" Jan 21 23:36:23.793047 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 21 23:36:23.809573 kubelet[2954]: E0121 23:36:23.809519 2954 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-34\" not found" Jan 21 23:36:23.810607 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 21 23:36:23.818238 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
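Every "connection refused" error above points at https://172.31.29.34:6443, which is this node's own address (it is registering as ip-172-31-29-34): on a control-plane node bootstrapping itself, the kubelet comes up before the kube-apiserver static pod it is about to launch, so these retries should clear once that pod is running. A small Python sketch, with host and port taken from the log, that waits for the port to start accepting connections:

    import socket
    import time

    # Rough sketch: poll the apiserver endpoint the kubelet is retrying above
    # until a TCP connection succeeds. Host/port come from the log lines.
    def wait_for_apiserver(host: str = "172.31.29.34", port: int = 6443,
                           timeout: float = 120.0) -> bool:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True   # something is listening on 6443
            except OSError:
                time.sleep(2)     # still refused - keep waiting
        return False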
Jan 21 23:36:23.840771 kubelet[2954]: I0121 23:36:23.840641 2954 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 23:36:23.841453 kubelet[2954]: I0121 23:36:23.841366 2954 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 23:36:23.841453 kubelet[2954]: I0121 23:36:23.841394 2954 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 23:36:23.842520 kubelet[2954]: I0121 23:36:23.842438 2954 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 23:36:23.851020 kubelet[2954]: E0121 23:36:23.850056 2954 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 21 23:36:23.851020 kubelet[2954]: E0121 23:36:23.850154 2954 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-34\" not found" Jan 21 23:36:23.872292 systemd[1]: Created slice kubepods-burstable-podf9ada2693492e169b86d8f2f719b2ec6.slice - libcontainer container kubepods-burstable-podf9ada2693492e169b86d8f2f719b2ec6.slice. Jan 21 23:36:23.893509 kubelet[2954]: E0121 23:36:23.893446 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:23.901379 systemd[1]: Created slice kubepods-burstable-pod3325061b5e1189b4b573cf8b3ef3a10c.slice - libcontainer container kubepods-burstable-pod3325061b5e1189b4b573cf8b3ef3a10c.slice. Jan 21 23:36:23.910704 kubelet[2954]: I0121 23:36:23.910632 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:23.910704 kubelet[2954]: I0121 23:36:23.910702 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:23.910922 kubelet[2954]: I0121 23:36:23.910745 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9ada2693492e169b86d8f2f719b2ec6-ca-certs\") pod \"kube-apiserver-ip-172-31-29-34\" (UID: \"f9ada2693492e169b86d8f2f719b2ec6\") " pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:23.910922 kubelet[2954]: I0121 23:36:23.910781 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9ada2693492e169b86d8f2f719b2ec6-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-34\" (UID: \"f9ada2693492e169b86d8f2f719b2ec6\") " pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:23.910922 kubelet[2954]: I0121 23:36:23.910820 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f9ada2693492e169b86d8f2f719b2ec6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-34\" (UID: \"f9ada2693492e169b86d8f2f719b2ec6\") " pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:23.910922 kubelet[2954]: I0121 23:36:23.910860 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:23.910922 kubelet[2954]: I0121 23:36:23.910897 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:23.911241 kubelet[2954]: I0121 23:36:23.910933 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:23.911241 kubelet[2954]: I0121 23:36:23.910990 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/769d41b131da27c3c5c292e8eb54c691-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-34\" (UID: \"769d41b131da27c3c5c292e8eb54c691\") " pod="kube-system/kube-scheduler-ip-172-31-29-34" Jan 21 23:36:23.914666 kubelet[2954]: E0121 23:36:23.914584 2954 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-34?timeout=10s\": dial tcp 172.31.29.34:6443: connect: connection refused" interval="400ms" Jan 21 23:36:23.915449 kubelet[2954]: E0121 23:36:23.915395 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:23.923160 systemd[1]: Created slice kubepods-burstable-pod769d41b131da27c3c5c292e8eb54c691.slice - libcontainer container kubepods-burstable-pod769d41b131da27c3c5c292e8eb54c691.slice. 
Jan 21 23:36:23.927556 kubelet[2954]: E0121 23:36:23.927511 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:23.944812 kubelet[2954]: I0121 23:36:23.944748 2954 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-34" Jan 21 23:36:23.945760 kubelet[2954]: E0121 23:36:23.945708 2954 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.34:6443/api/v1/nodes\": dial tcp 172.31.29.34:6443: connect: connection refused" node="ip-172-31-29-34" Jan 21 23:36:24.148319 kubelet[2954]: I0121 23:36:24.148125 2954 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-34" Jan 21 23:36:24.149153 kubelet[2954]: E0121 23:36:24.149077 2954 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.34:6443/api/v1/nodes\": dial tcp 172.31.29.34:6443: connect: connection refused" node="ip-172-31-29-34" Jan 21 23:36:24.196078 containerd[2003]: time="2026-01-21T23:36:24.196014319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-34,Uid:f9ada2693492e169b86d8f2f719b2ec6,Namespace:kube-system,Attempt:0,}" Jan 21 23:36:24.217284 containerd[2003]: time="2026-01-21T23:36:24.217223069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-34,Uid:3325061b5e1189b4b573cf8b3ef3a10c,Namespace:kube-system,Attempt:0,}" Jan 21 23:36:24.240365 containerd[2003]: time="2026-01-21T23:36:24.240295615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-34,Uid:769d41b131da27c3c5c292e8eb54c691,Namespace:kube-system,Attempt:0,}" Jan 21 23:36:24.252207 containerd[2003]: time="2026-01-21T23:36:24.252073711Z" level=info msg="connecting to shim 030bd281891d8ed49212916b077504af9e537c10a66bcf0ba1ef460e8956eae0" address="unix:///run/containerd/s/30a18bfc4c7699d98854c4d7eb69fed12d3d04c16403cc707bc6918c9ff74059" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:36:24.317232 kubelet[2954]: E0121 23:36:24.316082 2954 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-34?timeout=10s\": dial tcp 172.31.29.34:6443: connect: connection refused" interval="800ms" Jan 21 23:36:24.321460 containerd[2003]: time="2026-01-21T23:36:24.321134612Z" level=info msg="connecting to shim 52dfb9dabd51463060684f18ade5312bee43ddcba2a5cce8cb290de86c6536c3" address="unix:///run/containerd/s/f349b7ca680b2756058b268f7f5ab12ec86837c7ca11d0d22fbaa8f9558df82c" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:36:24.341354 systemd[1]: Started cri-containerd-030bd281891d8ed49212916b077504af9e537c10a66bcf0ba1ef460e8956eae0.scope - libcontainer container 030bd281891d8ed49212916b077504af9e537c10a66bcf0ba1ef460e8956eae0. 
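Annotation: every failure in this stretch (the client-go reflectors, the lease controller, and both node-registration attempts) bottoms out in the same dial error against https://172.31.29.34:6443, which simply means nothing is listening on the API server port yet; kube-apiserver itself is one of the static pods whose sandboxes are being created below. A small Python probe that reproduces the failing step at the TCP level (host and port are taken from the log; the helper name is ours):

import socket

def apiserver_listening(host: str = "172.31.29.34", port: int = 6443,
                        timeout: float = 2.0) -> bool:
    # A bare TCP connect, the same step that fails with
    # "connect: connection refused" in the kubelet errors above.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False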
Jan 21 23:36:24.372769 containerd[2003]: time="2026-01-21T23:36:24.372671199Z" level=info msg="connecting to shim 965708b0cfd4fca9b005d9d6cd704567f1c96295f682cefc5bb7ea7b20f01759" address="unix:///run/containerd/s/9b9ae58530d69ecf2cfaa39611d145064b282621e1c1205e011bf94b548f0e29" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:36:24.387000 audit: BPF prog-id=90 op=LOAD Jan 21 23:36:24.389000 audit: BPF prog-id=91 op=LOAD Jan 21 23:36:24.389000 audit[3006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2994 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306264323831383931643865643439323132393136623037373530 Jan 21 23:36:24.389000 audit: BPF prog-id=91 op=UNLOAD Jan 21 23:36:24.389000 audit[3006]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306264323831383931643865643439323132393136623037373530 Jan 21 23:36:24.389000 audit: BPF prog-id=92 op=LOAD Jan 21 23:36:24.389000 audit[3006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2994 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306264323831383931643865643439323132393136623037373530 Jan 21 23:36:24.390000 audit: BPF prog-id=93 op=LOAD Jan 21 23:36:24.390000 audit[3006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2994 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306264323831383931643865643439323132393136623037373530 Jan 21 23:36:24.390000 audit: BPF prog-id=93 op=UNLOAD Jan 21 23:36:24.390000 audit[3006]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.390000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306264323831383931643865643439323132393136623037373530 Jan 21 23:36:24.391000 audit: BPF prog-id=92 op=UNLOAD Jan 21 23:36:24.391000 audit[3006]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306264323831383931643865643439323132393136623037373530 Jan 21 23:36:24.391000 audit: BPF prog-id=94 op=LOAD Jan 21 23:36:24.391000 audit[3006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2994 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306264323831383931643865643439323132393136623037373530 Jan 21 23:36:24.416371 systemd[1]: Started cri-containerd-52dfb9dabd51463060684f18ade5312bee43ddcba2a5cce8cb290de86c6536c3.scope - libcontainer container 52dfb9dabd51463060684f18ade5312bee43ddcba2a5cce8cb290de86c6536c3. Jan 21 23:36:24.451453 systemd[1]: Started cri-containerd-965708b0cfd4fca9b005d9d6cd704567f1c96295f682cefc5bb7ea7b20f01759.scope - libcontainer container 965708b0cfd4fca9b005d9d6cd704567f1c96295f682cefc5bb7ea7b20f01759. 
Jan 21 23:36:24.468000 audit: BPF prog-id=95 op=LOAD Jan 21 23:36:24.471000 audit: BPF prog-id=96 op=LOAD Jan 21 23:36:24.471000 audit[3057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3022 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532646662396461626435313436333036303638346631386164653533 Jan 21 23:36:24.472000 audit: BPF prog-id=96 op=UNLOAD Jan 21 23:36:24.472000 audit[3057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532646662396461626435313436333036303638346631386164653533 Jan 21 23:36:24.472000 audit: BPF prog-id=97 op=LOAD Jan 21 23:36:24.472000 audit[3057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3022 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532646662396461626435313436333036303638346631386164653533 Jan 21 23:36:24.472000 audit: BPF prog-id=98 op=LOAD Jan 21 23:36:24.472000 audit[3057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3022 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532646662396461626435313436333036303638346631386164653533 Jan 21 23:36:24.472000 audit: BPF prog-id=98 op=UNLOAD Jan 21 23:36:24.472000 audit[3057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532646662396461626435313436333036303638346631386164653533 Jan 21 23:36:24.472000 audit: BPF prog-id=97 op=UNLOAD Jan 21 23:36:24.472000 audit[3057]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532646662396461626435313436333036303638346631386164653533 Jan 21 23:36:24.473000 audit: BPF prog-id=99 op=LOAD Jan 21 23:36:24.473000 audit[3057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3022 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532646662396461626435313436333036303638346631386164653533 Jan 21 23:36:24.495744 containerd[2003]: time="2026-01-21T23:36:24.494808860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-34,Uid:f9ada2693492e169b86d8f2f719b2ec6,Namespace:kube-system,Attempt:0,} returns sandbox id \"030bd281891d8ed49212916b077504af9e537c10a66bcf0ba1ef460e8956eae0\"" Jan 21 23:36:24.498752 kubelet[2954]: W0121 23:36:24.498656 2954 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-34&limit=500&resourceVersion=0": dial tcp 172.31.29.34:6443: connect: connection refused Jan 21 23:36:24.499501 kubelet[2954]: E0121 23:36:24.498761 2954 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-34&limit=500&resourceVersion=0\": dial tcp 172.31.29.34:6443: connect: connection refused" logger="UnhandledError" Jan 21 23:36:24.508211 containerd[2003]: time="2026-01-21T23:36:24.508093186Z" level=info msg="CreateContainer within sandbox \"030bd281891d8ed49212916b077504af9e537c10a66bcf0ba1ef460e8956eae0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 21 23:36:24.510000 audit: BPF prog-id=100 op=LOAD Jan 21 23:36:24.512000 audit: BPF prog-id=101 op=LOAD Jan 21 23:36:24.512000 audit[3079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3046 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936353730386230636664346663613962303035643964366364373034 Jan 21 23:36:24.513000 audit: BPF prog-id=101 op=UNLOAD Jan 21 23:36:24.513000 audit[3079]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936353730386230636664346663613962303035643964366364373034 Jan 21 23:36:24.514000 audit: BPF prog-id=102 op=LOAD Jan 21 23:36:24.514000 audit[3079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3046 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936353730386230636664346663613962303035643964366364373034 Jan 21 23:36:24.514000 audit: BPF prog-id=103 op=LOAD Jan 21 23:36:24.514000 audit[3079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3046 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936353730386230636664346663613962303035643964366364373034 Jan 21 23:36:24.514000 audit: BPF prog-id=103 op=UNLOAD Jan 21 23:36:24.514000 audit[3079]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936353730386230636664346663613962303035643964366364373034 Jan 21 23:36:24.515000 audit: BPF prog-id=102 op=UNLOAD Jan 21 23:36:24.515000 audit[3079]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936353730386230636664346663613962303035643964366364373034 Jan 21 23:36:24.515000 audit: BPF prog-id=104 op=LOAD Jan 21 23:36:24.515000 audit[3079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3046 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
23:36:24.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936353730386230636664346663613962303035643964366364373034 Jan 21 23:36:24.541225 containerd[2003]: time="2026-01-21T23:36:24.541158102Z" level=info msg="Container 988ad88bc69c4fa9fdeee2b632e10dec8e12e2cdb9fbcc7ac3e7ee0a97996b55: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:36:24.555277 kubelet[2954]: I0121 23:36:24.554760 2954 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-34" Jan 21 23:36:24.557373 kubelet[2954]: E0121 23:36:24.557289 2954 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.34:6443/api/v1/nodes\": dial tcp 172.31.29.34:6443: connect: connection refused" node="ip-172-31-29-34" Jan 21 23:36:24.574630 containerd[2003]: time="2026-01-21T23:36:24.574521824Z" level=info msg="CreateContainer within sandbox \"030bd281891d8ed49212916b077504af9e537c10a66bcf0ba1ef460e8956eae0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"988ad88bc69c4fa9fdeee2b632e10dec8e12e2cdb9fbcc7ac3e7ee0a97996b55\"" Jan 21 23:36:24.576376 containerd[2003]: time="2026-01-21T23:36:24.576312648Z" level=info msg="StartContainer for \"988ad88bc69c4fa9fdeee2b632e10dec8e12e2cdb9fbcc7ac3e7ee0a97996b55\"" Jan 21 23:36:24.581303 containerd[2003]: time="2026-01-21T23:36:24.580904335Z" level=info msg="connecting to shim 988ad88bc69c4fa9fdeee2b632e10dec8e12e2cdb9fbcc7ac3e7ee0a97996b55" address="unix:///run/containerd/s/30a18bfc4c7699d98854c4d7eb69fed12d3d04c16403cc707bc6918c9ff74059" protocol=ttrpc version=3 Jan 21 23:36:24.586734 containerd[2003]: time="2026-01-21T23:36:24.586680321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-34,Uid:3325061b5e1189b4b573cf8b3ef3a10c,Namespace:kube-system,Attempt:0,} returns sandbox id \"52dfb9dabd51463060684f18ade5312bee43ddcba2a5cce8cb290de86c6536c3\"" Jan 21 23:36:24.595304 containerd[2003]: time="2026-01-21T23:36:24.595225147Z" level=info msg="CreateContainer within sandbox \"52dfb9dabd51463060684f18ade5312bee43ddcba2a5cce8cb290de86c6536c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 21 23:36:24.601119 containerd[2003]: time="2026-01-21T23:36:24.601060204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-34,Uid:769d41b131da27c3c5c292e8eb54c691,Namespace:kube-system,Attempt:0,} returns sandbox id \"965708b0cfd4fca9b005d9d6cd704567f1c96295f682cefc5bb7ea7b20f01759\"" Jan 21 23:36:24.620964 containerd[2003]: time="2026-01-21T23:36:24.620920361Z" level=info msg="Container 32d0210c4aa4c15376748dbc91a54e635f775ac90800eeee83eebd9002fd01ad: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:36:24.626365 systemd[1]: Started cri-containerd-988ad88bc69c4fa9fdeee2b632e10dec8e12e2cdb9fbcc7ac3e7ee0a97996b55.scope - libcontainer container 988ad88bc69c4fa9fdeee2b632e10dec8e12e2cdb9fbcc7ac3e7ee0a97996b55. 
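Annotation: the containerd lines in this stretch show the CRI flow for the kube-apiserver static pod end to end: RunPodSandbox returns sandbox 030bd281..., CreateContainer inside that sandbox returns container 988ad88b..., and StartContainer then connects to a shim at the same unix socket the sandbox uses (/run/containerd/s/30a18bfc...), so containers here reuse their sandbox's shim. A small parsing sketch (ours) that rebuilds those mappings from journal text; the regexes assume one journal entry per line and quotes escaped as \" exactly as they appear above.

import re

SANDBOX_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),'
    r'.*?returns sandbox id \\"(?P<sid>[0-9a-f]{64})\\"')
CONTAINER_RE = re.compile(
    r'CreateContainer within sandbox \\"(?P<sid>[0-9a-f]{64})\\" for '
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),'
    r'.*?returns container id \\"(?P<cid>[0-9a-f]{64})\\"')

def cri_objects(journal_text: str):
    # Map sandbox id -> pod name, and container id -> (container name, sandbox id),
    # using the containerd "returns sandbox id" / "returns container id" lines.
    sandboxes = {m["sid"]: m["pod"] for m in SANDBOX_RE.finditer(journal_text)}
    containers = {m["cid"]: (m["name"], m["sid"])
                  for m in CONTAINER_RE.finditer(journal_text)}
    return sandboxes, containers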
Jan 21 23:36:24.639300 containerd[2003]: time="2026-01-21T23:36:24.639194563Z" level=info msg="CreateContainer within sandbox \"965708b0cfd4fca9b005d9d6cd704567f1c96295f682cefc5bb7ea7b20f01759\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 21 23:36:24.654263 kubelet[2954]: W0121 23:36:24.654099 2954 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.34:6443: connect: connection refused Jan 21 23:36:24.654954 kubelet[2954]: E0121 23:36:24.654836 2954 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.34:6443: connect: connection refused" logger="UnhandledError" Jan 21 23:36:24.662000 audit: BPF prog-id=105 op=LOAD Jan 21 23:36:24.664671 containerd[2003]: time="2026-01-21T23:36:24.664613232Z" level=info msg="CreateContainer within sandbox \"52dfb9dabd51463060684f18ade5312bee43ddcba2a5cce8cb290de86c6536c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32d0210c4aa4c15376748dbc91a54e635f775ac90800eeee83eebd9002fd01ad\"" Jan 21 23:36:24.666034 containerd[2003]: time="2026-01-21T23:36:24.665894371Z" level=info msg="StartContainer for \"32d0210c4aa4c15376748dbc91a54e635f775ac90800eeee83eebd9002fd01ad\"" Jan 21 23:36:24.665000 audit: BPF prog-id=106 op=LOAD Jan 21 23:36:24.665000 audit[3125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=2994 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938386164383862633639633466613966646565653262363332653130 Jan 21 23:36:24.666000 audit: BPF prog-id=106 op=UNLOAD Jan 21 23:36:24.666000 audit[3125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938386164383862633639633466613966646565653262363332653130 Jan 21 23:36:24.666000 audit: BPF prog-id=107 op=LOAD Jan 21 23:36:24.666000 audit[3125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=2994 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.666000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938386164383862633639633466613966646565653262363332653130 Jan 21 23:36:24.667000 audit: BPF prog-id=108 op=LOAD Jan 21 23:36:24.667000 audit[3125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=2994 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938386164383862633639633466613966646565653262363332653130 Jan 21 23:36:24.668000 audit: BPF prog-id=108 op=UNLOAD Jan 21 23:36:24.668000 audit[3125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938386164383862633639633466613966646565653262363332653130 Jan 21 23:36:24.668000 audit: BPF prog-id=107 op=UNLOAD Jan 21 23:36:24.668000 audit[3125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938386164383862633639633466613966646565653262363332653130 Jan 21 23:36:24.668000 audit: BPF prog-id=109 op=LOAD Jan 21 23:36:24.668000 audit[3125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=2994 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938386164383862633639633466613966646565653262363332653130 Jan 21 23:36:24.674017 containerd[2003]: time="2026-01-21T23:36:24.673053086Z" level=info msg="connecting to shim 32d0210c4aa4c15376748dbc91a54e635f775ac90800eeee83eebd9002fd01ad" address="unix:///run/containerd/s/f349b7ca680b2756058b268f7f5ab12ec86837c7ca11d0d22fbaa8f9558df82c" protocol=ttrpc version=3 Jan 21 23:36:24.688148 containerd[2003]: time="2026-01-21T23:36:24.688093754Z" level=info msg="Container 737fa5a7f760b0c92a2c65bdb335b9ef06338f777e1733e2237ae15b86bd29fa: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:36:24.725015 containerd[2003]: 
time="2026-01-21T23:36:24.724910236Z" level=info msg="CreateContainer within sandbox \"965708b0cfd4fca9b005d9d6cd704567f1c96295f682cefc5bb7ea7b20f01759\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"737fa5a7f760b0c92a2c65bdb335b9ef06338f777e1733e2237ae15b86bd29fa\"" Jan 21 23:36:24.725391 systemd[1]: Started cri-containerd-32d0210c4aa4c15376748dbc91a54e635f775ac90800eeee83eebd9002fd01ad.scope - libcontainer container 32d0210c4aa4c15376748dbc91a54e635f775ac90800eeee83eebd9002fd01ad. Jan 21 23:36:24.727288 containerd[2003]: time="2026-01-21T23:36:24.726918488Z" level=info msg="StartContainer for \"737fa5a7f760b0c92a2c65bdb335b9ef06338f777e1733e2237ae15b86bd29fa\"" Jan 21 23:36:24.731139 containerd[2003]: time="2026-01-21T23:36:24.731024813Z" level=info msg="connecting to shim 737fa5a7f760b0c92a2c65bdb335b9ef06338f777e1733e2237ae15b86bd29fa" address="unix:///run/containerd/s/9b9ae58530d69ecf2cfaa39611d145064b282621e1c1205e011bf94b548f0e29" protocol=ttrpc version=3 Jan 21 23:36:24.778648 containerd[2003]: time="2026-01-21T23:36:24.778441821Z" level=info msg="StartContainer for \"988ad88bc69c4fa9fdeee2b632e10dec8e12e2cdb9fbcc7ac3e7ee0a97996b55\" returns successfully" Jan 21 23:36:24.780000 audit: BPF prog-id=110 op=LOAD Jan 21 23:36:24.782000 audit: BPF prog-id=111 op=LOAD Jan 21 23:36:24.782000 audit[3147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3022 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643032313063346161346331353337363734386462633931613534 Jan 21 23:36:24.784000 audit: BPF prog-id=111 op=UNLOAD Jan 21 23:36:24.784000 audit[3147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643032313063346161346331353337363734386462633931613534 Jan 21 23:36:24.786000 audit: BPF prog-id=112 op=LOAD Jan 21 23:36:24.786000 audit[3147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3022 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643032313063346161346331353337363734386462633931613534 Jan 21 23:36:24.786000 audit: BPF prog-id=113 op=LOAD Jan 21 23:36:24.786000 audit[3147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3022 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643032313063346161346331353337363734386462633931613534 Jan 21 23:36:24.786000 audit: BPF prog-id=113 op=UNLOAD Jan 21 23:36:24.786000 audit[3147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643032313063346161346331353337363734386462633931613534 Jan 21 23:36:24.786000 audit: BPF prog-id=112 op=UNLOAD Jan 21 23:36:24.786000 audit[3147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643032313063346161346331353337363734386462633931613534 Jan 21 23:36:24.786000 audit: BPF prog-id=114 op=LOAD Jan 21 23:36:24.786000 audit[3147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3022 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643032313063346161346331353337363734386462633931613534 Jan 21 23:36:24.805496 kubelet[2954]: E0121 23:36:24.805431 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:24.825917 systemd[1]: Started cri-containerd-737fa5a7f760b0c92a2c65bdb335b9ef06338f777e1733e2237ae15b86bd29fa.scope - libcontainer container 737fa5a7f760b0c92a2c65bdb335b9ef06338f777e1733e2237ae15b86bd29fa. 
Jan 21 23:36:24.874703 kubelet[2954]: W0121 23:36:24.874611 2954 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.34:6443: connect: connection refused Jan 21 23:36:24.874859 kubelet[2954]: E0121 23:36:24.874715 2954 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.34:6443: connect: connection refused" logger="UnhandledError" Jan 21 23:36:24.889408 containerd[2003]: time="2026-01-21T23:36:24.889240690Z" level=info msg="StartContainer for \"32d0210c4aa4c15376748dbc91a54e635f775ac90800eeee83eebd9002fd01ad\" returns successfully" Jan 21 23:36:24.904000 audit: BPF prog-id=115 op=LOAD Jan 21 23:36:24.906000 audit: BPF prog-id=116 op=LOAD Jan 21 23:36:24.906000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3046 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733376661356137663736306230633932613263363562646233333562 Jan 21 23:36:24.907000 audit: BPF prog-id=116 op=UNLOAD Jan 21 23:36:24.907000 audit[3162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733376661356137663736306230633932613263363562646233333562 Jan 21 23:36:24.907000 audit: BPF prog-id=117 op=LOAD Jan 21 23:36:24.907000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3046 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733376661356137663736306230633932613263363562646233333562 Jan 21 23:36:24.907000 audit: BPF prog-id=118 op=LOAD Jan 21 23:36:24.907000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3046 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.907000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733376661356137663736306230633932613263363562646233333562 Jan 21 23:36:24.907000 audit: BPF prog-id=118 op=UNLOAD Jan 21 23:36:24.907000 audit[3162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733376661356137663736306230633932613263363562646233333562 Jan 21 23:36:24.907000 audit: BPF prog-id=117 op=UNLOAD Jan 21 23:36:24.907000 audit[3162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733376661356137663736306230633932613263363562646233333562 Jan 21 23:36:24.908000 audit: BPF prog-id=119 op=LOAD Jan 21 23:36:24.908000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3046 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:24.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733376661356137663736306230633932613263363562646233333562 Jan 21 23:36:24.935802 kubelet[2954]: W0121 23:36:24.935612 2954 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.34:6443: connect: connection refused Jan 21 23:36:24.935802 kubelet[2954]: E0121 23:36:24.935718 2954 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.34:6443: connect: connection refused" logger="UnhandledError" Jan 21 23:36:25.040424 containerd[2003]: time="2026-01-21T23:36:25.040350686Z" level=info msg="StartContainer for \"737fa5a7f760b0c92a2c65bdb335b9ef06338f777e1733e2237ae15b86bd29fa\" returns successfully" Jan 21 23:36:25.360338 kubelet[2954]: I0121 23:36:25.360287 2954 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-34" Jan 21 23:36:25.834128 kubelet[2954]: E0121 23:36:25.834067 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:25.838453 kubelet[2954]: E0121 23:36:25.838399 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:25.839870 kubelet[2954]: E0121 23:36:25.839819 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:26.685895 update_engine[1972]: I20260121 23:36:26.684020 1972 update_attempter.cc:509] Updating boot flags... Jan 21 23:36:26.851184 kubelet[2954]: E0121 23:36:26.851124 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:26.853678 kubelet[2954]: E0121 23:36:26.852647 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:27.853017 kubelet[2954]: E0121 23:36:27.852922 2954 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:29.883964 kubelet[2954]: E0121 23:36:29.883895 2954 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-34\" not found" node="ip-172-31-29-34" Jan 21 23:36:29.942927 kubelet[2954]: I0121 23:36:29.942833 2954 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-34" Jan 21 23:36:29.982401 kubelet[2954]: E0121 23:36:29.981314 2954 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-34.188ce338168b04ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-34,UID:ip-172-31-29-34,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-34,},FirstTimestamp:2026-01-21 23:36:23.685645498 +0000 UTC m=+1.827190038,LastTimestamp:2026-01-21 23:36:23.685645498 +0000 UTC m=+1.827190038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-34,}" Jan 21 23:36:30.010067 kubelet[2954]: I0121 23:36:30.009766 2954 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:30.048316 kubelet[2954]: E0121 23:36:30.048262 2954 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-34\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:30.049135 kubelet[2954]: I0121 23:36:30.048711 2954 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:30.058010 kubelet[2954]: E0121 23:36:30.057911 2954 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-34\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:30.058010 kubelet[2954]: I0121 23:36:30.057960 2954 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ip-172-31-29-34" Jan 21 23:36:30.064998 kubelet[2954]: E0121 23:36:30.064926 2954 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-34\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-34" Jan 21 23:36:30.408049 kubelet[2954]: I0121 23:36:30.407971 2954 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-34" Jan 21 23:36:30.411653 kubelet[2954]: E0121 23:36:30.411595 2954 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-34\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-34" Jan 21 23:36:30.691435 kubelet[2954]: I0121 23:36:30.690959 2954 apiserver.go:52] "Watching apiserver" Jan 21 23:36:30.710169 kubelet[2954]: I0121 23:36:30.710090 2954 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 23:36:30.757934 kubelet[2954]: I0121 23:36:30.757882 2954 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:32.326185 systemd[1]: Reload requested from client PID 3408 ('systemctl') (unit session-7.scope)... Jan 21 23:36:32.326217 systemd[1]: Reloading... Jan 21 23:36:32.583039 zram_generator::config[3461]: No configuration found. Jan 21 23:36:33.144606 systemd[1]: Reloading finished in 817 ms. Jan 21 23:36:33.190128 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 23:36:33.211610 systemd[1]: kubelet.service: Deactivated successfully. Jan 21 23:36:33.214110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:36:33.220592 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 21 23:36:33.220672 kernel: audit: type=1131 audit(1769038593.213:412): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:33.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:33.214248 systemd[1]: kubelet.service: Consumed 2.682s CPU time, 127.8M memory peak. Jan 21 23:36:33.220525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 21 23:36:33.220000 audit: BPF prog-id=120 op=LOAD Jan 21 23:36:33.223328 kernel: audit: type=1334 audit(1769038593.220:413): prog-id=120 op=LOAD Jan 21 23:36:33.220000 audit: BPF prog-id=75 op=UNLOAD Jan 21 23:36:33.225280 kernel: audit: type=1334 audit(1769038593.220:414): prog-id=75 op=UNLOAD Jan 21 23:36:33.225000 audit: BPF prog-id=121 op=LOAD Jan 21 23:36:33.229039 kernel: audit: type=1334 audit(1769038593.225:415): prog-id=121 op=LOAD Jan 21 23:36:33.228000 audit: BPF prog-id=122 op=LOAD Jan 21 23:36:33.230999 kernel: audit: type=1334 audit(1769038593.228:416): prog-id=122 op=LOAD Jan 21 23:36:33.228000 audit: BPF prog-id=76 op=UNLOAD Jan 21 23:36:33.228000 audit: BPF prog-id=77 op=UNLOAD Jan 21 23:36:33.234582 kernel: audit: type=1334 audit(1769038593.228:417): prog-id=76 op=UNLOAD Jan 21 23:36:33.234740 kernel: audit: type=1334 audit(1769038593.228:418): prog-id=77 op=UNLOAD Jan 21 23:36:33.255735 kernel: audit: type=1334 audit(1769038593.250:419): prog-id=123 op=LOAD Jan 21 23:36:33.255852 kernel: audit: type=1334 audit(1769038593.250:420): prog-id=86 op=UNLOAD Jan 21 23:36:33.250000 audit: BPF prog-id=123 op=LOAD Jan 21 23:36:33.257942 kernel: audit: type=1334 audit(1769038593.254:421): prog-id=124 op=LOAD Jan 21 23:36:33.250000 audit: BPF prog-id=86 op=UNLOAD Jan 21 23:36:33.254000 audit: BPF prog-id=124 op=LOAD Jan 21 23:36:33.254000 audit: BPF prog-id=87 op=UNLOAD Jan 21 23:36:33.254000 audit: BPF prog-id=125 op=LOAD Jan 21 23:36:33.255000 audit: BPF prog-id=126 op=LOAD Jan 21 23:36:33.255000 audit: BPF prog-id=88 op=UNLOAD Jan 21 23:36:33.255000 audit: BPF prog-id=89 op=UNLOAD Jan 21 23:36:33.257000 audit: BPF prog-id=127 op=LOAD Jan 21 23:36:33.257000 audit: BPF prog-id=78 op=UNLOAD Jan 21 23:36:33.260000 audit: BPF prog-id=128 op=LOAD Jan 21 23:36:33.261000 audit: BPF prog-id=70 op=UNLOAD Jan 21 23:36:33.261000 audit: BPF prog-id=129 op=LOAD Jan 21 23:36:33.261000 audit: BPF prog-id=130 op=LOAD Jan 21 23:36:33.261000 audit: BPF prog-id=71 op=UNLOAD Jan 21 23:36:33.261000 audit: BPF prog-id=72 op=UNLOAD Jan 21 23:36:33.262000 audit: BPF prog-id=131 op=LOAD Jan 21 23:36:33.262000 audit: BPF prog-id=79 op=UNLOAD Jan 21 23:36:33.263000 audit: BPF prog-id=132 op=LOAD Jan 21 23:36:33.263000 audit: BPF prog-id=133 op=LOAD Jan 21 23:36:33.263000 audit: BPF prog-id=80 op=UNLOAD Jan 21 23:36:33.263000 audit: BPF prog-id=81 op=UNLOAD Jan 21 23:36:33.263000 audit: BPF prog-id=134 op=LOAD Jan 21 23:36:33.264000 audit: BPF prog-id=135 op=LOAD Jan 21 23:36:33.264000 audit: BPF prog-id=73 op=UNLOAD Jan 21 23:36:33.264000 audit: BPF prog-id=74 op=UNLOAD Jan 21 23:36:33.265000 audit: BPF prog-id=136 op=LOAD Jan 21 23:36:33.265000 audit: BPF prog-id=82 op=UNLOAD Jan 21 23:36:33.265000 audit: BPF prog-id=137 op=LOAD Jan 21 23:36:33.265000 audit: BPF prog-id=138 op=LOAD Jan 21 23:36:33.265000 audit: BPF prog-id=83 op=UNLOAD Jan 21 23:36:33.266000 audit: BPF prog-id=84 op=UNLOAD Jan 21 23:36:33.267000 audit: BPF prog-id=139 op=LOAD Jan 21 23:36:33.267000 audit: BPF prog-id=85 op=UNLOAD Jan 21 23:36:33.666743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 23:36:33.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:36:33.683721 (kubelet)[3515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 23:36:33.795420 kubelet[3515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 23:36:33.798035 kubelet[3515]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 23:36:33.798035 kubelet[3515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 23:36:33.798035 kubelet[3515]: I0121 23:36:33.796217 3515 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 23:36:33.814224 kubelet[3515]: I0121 23:36:33.814165 3515 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 21 23:36:33.814449 kubelet[3515]: I0121 23:36:33.814423 3515 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 23:36:33.816181 kubelet[3515]: I0121 23:36:33.816112 3515 server.go:954] "Client rotation is on, will bootstrap in background" Jan 21 23:36:33.825659 kubelet[3515]: I0121 23:36:33.825613 3515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 21 23:36:33.833161 kubelet[3515]: I0121 23:36:33.833106 3515 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 23:36:33.848997 kubelet[3515]: I0121 23:36:33.848942 3515 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 23:36:33.856452 kubelet[3515]: I0121 23:36:33.856405 3515 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 21 23:36:33.857540 kubelet[3515]: I0121 23:36:33.857200 3515 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 23:36:33.858082 kubelet[3515]: I0121 23:36:33.857729 3515 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 23:36:33.858500 kubelet[3515]: I0121 23:36:33.858465 3515 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 23:36:33.858642 kubelet[3515]: I0121 23:36:33.858621 3515 container_manager_linux.go:304] "Creating device plugin manager" Jan 21 23:36:33.858840 kubelet[3515]: I0121 23:36:33.858818 3515 state_mem.go:36] "Initialized new in-memory state store" Jan 21 23:36:33.859279 kubelet[3515]: I0121 23:36:33.859251 3515 kubelet.go:446] "Attempting to sync node with API server" Jan 21 23:36:33.859494 kubelet[3515]: I0121 23:36:33.859470 3515 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 23:36:33.860232 kubelet[3515]: I0121 23:36:33.860174 3515 kubelet.go:352] "Adding apiserver pod source" Jan 21 23:36:33.860232 kubelet[3515]: I0121 23:36:33.860227 3515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 23:36:33.866019 kubelet[3515]: I0121 23:36:33.865906 3515 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 23:36:33.868056 kubelet[3515]: I0121 23:36:33.867969 3515 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 23:36:33.871029 kubelet[3515]: I0121 23:36:33.868841 3515 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 23:36:33.871029 kubelet[3515]: I0121 23:36:33.868915 3515 server.go:1287] "Started kubelet" Jan 21 23:36:33.887964 kubelet[3515]: I0121 23:36:33.887758 3515 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 23:36:33.899048 kubelet[3515]: I0121 23:36:33.898425 3515 server.go:479] "Adding 
debug handlers to kubelet server" Jan 21 23:36:33.920025 kubelet[3515]: I0121 23:36:33.918631 3515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 23:36:33.925699 kubelet[3515]: I0121 23:36:33.925636 3515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 23:36:33.927235 kubelet[3515]: I0121 23:36:33.927168 3515 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 23:36:33.952148 kubelet[3515]: I0121 23:36:33.927550 3515 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 23:36:33.952148 kubelet[3515]: I0121 23:36:33.889562 3515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 23:36:33.954099 kubelet[3515]: E0121 23:36:33.936561 3515 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-34\" not found" Jan 21 23:36:33.954615 kubelet[3515]: I0121 23:36:33.954394 3515 reconciler.go:26] "Reconciler: start to sync state" Jan 21 23:36:33.992309 kubelet[3515]: I0121 23:36:33.991219 3515 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 23:36:33.998350 kubelet[3515]: I0121 23:36:33.993515 3515 factory.go:221] Registration of the systemd container factory successfully Jan 21 23:36:33.998510 kubelet[3515]: I0121 23:36:33.998469 3515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 23:36:34.013158 kubelet[3515]: E0121 23:36:34.013084 3515 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 23:36:34.023559 kubelet[3515]: I0121 23:36:34.023497 3515 factory.go:221] Registration of the containerd container factory successfully Jan 21 23:36:34.085479 kubelet[3515]: I0121 23:36:34.085395 3515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 23:36:34.098606 kubelet[3515]: I0121 23:36:34.097422 3515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 23:36:34.098606 kubelet[3515]: I0121 23:36:34.097500 3515 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 21 23:36:34.098606 kubelet[3515]: I0121 23:36:34.097538 3515 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
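
The container_manager_linux entry above dumps the kubelet's effective NodeConfig as one JSON blob, which makes the hard-eviction settings easy to miss. A small sketch that pulls just those thresholds out of the blob; the values below are copied (trimmed) from that log entry:

    # Extract the HardEvictionThresholds from the NodeConfig JSON logged above.
    import json

    node_config = json.loads("""
    {"HardEvictionThresholds":[
     {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
     {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
     {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
     {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]}
    """)

    for t in node_config["HardEvictionThresholds"]:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] is not None else f"{v['Percentage']:.0%}"
        print(f"evict when {t['Signal']} < {limit}")  # e.g. "evict when memory.available < 100Mi"
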
Jan 21 23:36:34.098606 kubelet[3515]: I0121 23:36:34.097553 3515 kubelet.go:2382] "Starting kubelet main sync loop" Jan 21 23:36:34.098606 kubelet[3515]: E0121 23:36:34.097640 3515 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 23:36:34.198514 kubelet[3515]: E0121 23:36:34.198160 3515 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 21 23:36:34.201410 kubelet[3515]: I0121 23:36:34.199646 3515 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 23:36:34.201410 kubelet[3515]: I0121 23:36:34.199920 3515 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 23:36:34.201410 kubelet[3515]: I0121 23:36:34.200012 3515 state_mem.go:36] "Initialized new in-memory state store" Jan 21 23:36:34.201410 kubelet[3515]: I0121 23:36:34.200838 3515 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 21 23:36:34.201410 kubelet[3515]: I0121 23:36:34.200874 3515 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 21 23:36:34.201410 kubelet[3515]: I0121 23:36:34.200915 3515 policy_none.go:49] "None policy: Start" Jan 21 23:36:34.201410 kubelet[3515]: I0121 23:36:34.200955 3515 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 23:36:34.201410 kubelet[3515]: I0121 23:36:34.201043 3515 state_mem.go:35] "Initializing new in-memory state store" Jan 21 23:36:34.201410 kubelet[3515]: I0121 23:36:34.201311 3515 state_mem.go:75] "Updated machine memory state" Jan 21 23:36:34.240061 kubelet[3515]: I0121 23:36:34.238825 3515 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 23:36:34.240189 kubelet[3515]: I0121 23:36:34.240101 3515 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 23:36:34.241170 kubelet[3515]: I0121 23:36:34.240142 3515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 23:36:34.243863 kubelet[3515]: I0121 23:36:34.243048 3515 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 23:36:34.250101 kubelet[3515]: E0121 23:36:34.249835 3515 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 21 23:36:34.375820 kubelet[3515]: I0121 23:36:34.375674 3515 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-34" Jan 21 23:36:34.395513 kubelet[3515]: I0121 23:36:34.395458 3515 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-34" Jan 21 23:36:34.395659 kubelet[3515]: I0121 23:36:34.395637 3515 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-34" Jan 21 23:36:34.400281 kubelet[3515]: I0121 23:36:34.399601 3515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-34" Jan 21 23:36:34.402219 kubelet[3515]: I0121 23:36:34.401225 3515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:34.403099 kubelet[3515]: I0121 23:36:34.402791 3515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:34.426499 kubelet[3515]: E0121 23:36:34.426236 3515 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-34\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:34.468132 kubelet[3515]: I0121 23:36:34.466600 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9ada2693492e169b86d8f2f719b2ec6-ca-certs\") pod \"kube-apiserver-ip-172-31-29-34\" (UID: \"f9ada2693492e169b86d8f2f719b2ec6\") " pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:34.468132 kubelet[3515]: I0121 23:36:34.466672 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9ada2693492e169b86d8f2f719b2ec6-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-34\" (UID: \"f9ada2693492e169b86d8f2f719b2ec6\") " pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:34.468132 kubelet[3515]: I0121 23:36:34.466714 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9ada2693492e169b86d8f2f719b2ec6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-34\" (UID: \"f9ada2693492e169b86d8f2f719b2ec6\") " pod="kube-system/kube-apiserver-ip-172-31-29-34" Jan 21 23:36:34.468132 kubelet[3515]: I0121 23:36:34.466756 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:34.468132 kubelet[3515]: I0121 23:36:34.466794 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:34.468584 kubelet[3515]: I0121 23:36:34.466833 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:34.468584 kubelet[3515]: I0121 23:36:34.466872 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:34.468584 kubelet[3515]: I0121 23:36:34.466907 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3325061b5e1189b4b573cf8b3ef3a10c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-34\" (UID: \"3325061b5e1189b4b573cf8b3ef3a10c\") " pod="kube-system/kube-controller-manager-ip-172-31-29-34" Jan 21 23:36:34.468584 kubelet[3515]: I0121 23:36:34.466943 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/769d41b131da27c3c5c292e8eb54c691-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-34\" (UID: \"769d41b131da27c3c5c292e8eb54c691\") " pod="kube-system/kube-scheduler-ip-172-31-29-34" Jan 21 23:36:34.861855 kubelet[3515]: I0121 23:36:34.861487 3515 apiserver.go:52] "Watching apiserver" Jan 21 23:36:34.953270 kubelet[3515]: I0121 23:36:34.953183 3515 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 23:36:35.156074 kubelet[3515]: I0121 23:36:35.154722 3515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-34" Jan 21 23:36:35.177680 kubelet[3515]: E0121 23:36:35.177581 3515 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-34\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-34" Jan 21 23:36:35.219038 kubelet[3515]: I0121 23:36:35.217948 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-34" podStartSLOduration=1.21792765 podStartE2EDuration="1.21792765s" podCreationTimestamp="2026-01-21 23:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:36:35.217410169 +0000 UTC m=+1.519878222" watchObservedRunningTime="2026-01-21 23:36:35.21792765 +0000 UTC m=+1.520395667" Jan 21 23:36:35.264722 kubelet[3515]: I0121 23:36:35.263804 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-34" podStartSLOduration=1.2637802329999999 podStartE2EDuration="1.263780233s" podCreationTimestamp="2026-01-21 23:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:36:35.263523249 +0000 UTC m=+1.565991302" watchObservedRunningTime="2026-01-21 23:36:35.263780233 +0000 UTC m=+1.566248262" Jan 21 23:36:35.264722 kubelet[3515]: I0121 23:36:35.264509 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-34" podStartSLOduration=5.264488994 podStartE2EDuration="5.264488994s" podCreationTimestamp="2026-01-21 23:36:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:36:35.23886909 +0000 UTC m=+1.541337143" watchObservedRunningTime="2026-01-21 23:36:35.264488994 +0000 UTC m=+1.566957023" Jan 21 23:36:36.361572 kubelet[3515]: I0121 23:36:36.361438 3515 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 21 23:36:36.363024 containerd[2003]: time="2026-01-21T23:36:36.362933268Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 21 23:36:36.364550 kubelet[3515]: I0121 23:36:36.363940 3515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 21 23:36:37.459347 systemd[1]: Created slice kubepods-besteffort-pod3adf5735_3d70_4216_9f3a_f8a8b59d958e.slice - libcontainer container kubepods-besteffort-pod3adf5735_3d70_4216_9f3a_f8a8b59d958e.slice. Jan 21 23:36:37.488106 kubelet[3515]: I0121 23:36:37.488047 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ggsb\" (UniqueName: \"kubernetes.io/projected/3adf5735-3d70-4216-9f3a-f8a8b59d958e-kube-api-access-9ggsb\") pod \"kube-proxy-xgsbk\" (UID: \"3adf5735-3d70-4216-9f3a-f8a8b59d958e\") " pod="kube-system/kube-proxy-xgsbk" Jan 21 23:36:37.488789 kubelet[3515]: I0121 23:36:37.488118 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3adf5735-3d70-4216-9f3a-f8a8b59d958e-kube-proxy\") pod \"kube-proxy-xgsbk\" (UID: \"3adf5735-3d70-4216-9f3a-f8a8b59d958e\") " pod="kube-system/kube-proxy-xgsbk" Jan 21 23:36:37.488789 kubelet[3515]: I0121 23:36:37.488162 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3adf5735-3d70-4216-9f3a-f8a8b59d958e-xtables-lock\") pod \"kube-proxy-xgsbk\" (UID: \"3adf5735-3d70-4216-9f3a-f8a8b59d958e\") " pod="kube-system/kube-proxy-xgsbk" Jan 21 23:36:37.488789 kubelet[3515]: I0121 23:36:37.488196 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3adf5735-3d70-4216-9f3a-f8a8b59d958e-lib-modules\") pod \"kube-proxy-xgsbk\" (UID: \"3adf5735-3d70-4216-9f3a-f8a8b59d958e\") " pod="kube-system/kube-proxy-xgsbk" Jan 21 23:36:37.599825 systemd[1]: Created slice kubepods-besteffort-pod47636712_f986_472c_a585_43ab4c7900ae.slice - libcontainer container kubepods-besteffort-pod47636712_f986_472c_a585_43ab4c7900ae.slice. 
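
In the pod_startup_latency_tracker entries above, podStartSLOduration appears to be watchObservedRunningTime minus podCreationTimestamp; the zero-value firstStartedPulling/lastFinishedPulling timestamps mean no image-pull time is subtracted for these static pods. A quick check against the kube-scheduler figures, with the timestamps copied from the log (rounded to microseconds):

    # Reproduce podStartSLOduration=1.21792765s for kube-scheduler-ip-172-31-29-34
    # from the two timestamps in the log entry above.
    from datetime import datetime, timezone

    created  = datetime(2026, 1, 21, 23, 36, 34, tzinfo=timezone.utc)           # podCreationTimestamp
    observed = datetime(2026, 1, 21, 23, 36, 35, 217928, tzinfo=timezone.utc)   # watchObservedRunningTime (23:36:35.21792765)

    print((observed - created).total_seconds())  # ~1.217928 s, matching the logged SLO duration
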
Jan 21 23:36:37.689473 kubelet[3515]: I0121 23:36:37.689271 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvxdh\" (UniqueName: \"kubernetes.io/projected/47636712-f986-472c-a585-43ab4c7900ae-kube-api-access-kvxdh\") pod \"tigera-operator-7dcd859c48-phb7p\" (UID: \"47636712-f986-472c-a585-43ab4c7900ae\") " pod="tigera-operator/tigera-operator-7dcd859c48-phb7p" Jan 21 23:36:37.689473 kubelet[3515]: I0121 23:36:37.689373 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47636712-f986-472c-a585-43ab4c7900ae-var-lib-calico\") pod \"tigera-operator-7dcd859c48-phb7p\" (UID: \"47636712-f986-472c-a585-43ab4c7900ae\") " pod="tigera-operator/tigera-operator-7dcd859c48-phb7p" Jan 21 23:36:37.779879 containerd[2003]: time="2026-01-21T23:36:37.779601416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgsbk,Uid:3adf5735-3d70-4216-9f3a-f8a8b59d958e,Namespace:kube-system,Attempt:0,}" Jan 21 23:36:37.840048 containerd[2003]: time="2026-01-21T23:36:37.839829251Z" level=info msg="connecting to shim 59ecf2cab25ae8a836f8248276297f05ba4a530232bb64cc5a8bef0473dec74f" address="unix:///run/containerd/s/1abe622ca6c957617c3bc27c46d45517a464d451177d42f54b035a80a9831d68" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:36:37.903368 systemd[1]: Started cri-containerd-59ecf2cab25ae8a836f8248276297f05ba4a530232bb64cc5a8bef0473dec74f.scope - libcontainer container 59ecf2cab25ae8a836f8248276297f05ba4a530232bb64cc5a8bef0473dec74f. Jan 21 23:36:37.911292 containerd[2003]: time="2026-01-21T23:36:37.911219375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-phb7p,Uid:47636712-f986-472c-a585-43ab4c7900ae,Namespace:tigera-operator,Attempt:0,}" Jan 21 23:36:37.928000 audit: BPF prog-id=140 op=LOAD Jan 21 23:36:37.929000 audit: BPF prog-id=141 op=LOAD Jan 21 23:36:37.929000 audit[3583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3570 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:37.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656366326361623235616538613833366638323438323736323937 Jan 21 23:36:37.929000 audit: BPF prog-id=141 op=UNLOAD Jan 21 23:36:37.929000 audit[3583]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3570 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:37.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656366326361623235616538613833366638323438323736323937 Jan 21 23:36:37.929000 audit: BPF prog-id=142 op=LOAD Jan 21 23:36:37.929000 audit[3583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3570 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:37.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656366326361623235616538613833366638323438323736323937 Jan 21 23:36:37.930000 audit: BPF prog-id=143 op=LOAD Jan 21 23:36:37.930000 audit[3583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3570 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:37.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656366326361623235616538613833366638323438323736323937 Jan 21 23:36:37.930000 audit: BPF prog-id=143 op=UNLOAD Jan 21 23:36:37.930000 audit[3583]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3570 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:37.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656366326361623235616538613833366638323438323736323937 Jan 21 23:36:37.930000 audit: BPF prog-id=142 op=UNLOAD Jan 21 23:36:37.930000 audit[3583]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3570 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:37.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656366326361623235616538613833366638323438323736323937 Jan 21 23:36:37.930000 audit: BPF prog-id=144 op=LOAD Jan 21 23:36:37.930000 audit[3583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3570 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:37.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656366326361623235616538613833366638323438323736323937 Jan 21 23:36:37.980713 containerd[2003]: time="2026-01-21T23:36:37.980643322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgsbk,Uid:3adf5735-3d70-4216-9f3a-f8a8b59d958e,Namespace:kube-system,Attempt:0,} returns sandbox id \"59ecf2cab25ae8a836f8248276297f05ba4a530232bb64cc5a8bef0473dec74f\"" Jan 21 23:36:37.992285 containerd[2003]: time="2026-01-21T23:36:37.992197598Z" level=info 
msg="CreateContainer within sandbox \"59ecf2cab25ae8a836f8248276297f05ba4a530232bb64cc5a8bef0473dec74f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 21 23:36:37.998673 containerd[2003]: time="2026-01-21T23:36:37.998502076Z" level=info msg="connecting to shim e3afba8be03b972543baa3768dcace7e7c37850f8acca9fefeaf78b77116cfbb" address="unix:///run/containerd/s/1b3b29a97b9a30a659661a4fc940fe5f3b3bb4aff9c3c58cb845e01cc0635c3d" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:36:38.023593 containerd[2003]: time="2026-01-21T23:36:38.023527161Z" level=info msg="Container f6421442911047d295070923574c23619382770979d4cbceafd032112f891c3f: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:36:38.048189 containerd[2003]: time="2026-01-21T23:36:38.046833470Z" level=info msg="CreateContainer within sandbox \"59ecf2cab25ae8a836f8248276297f05ba4a530232bb64cc5a8bef0473dec74f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f6421442911047d295070923574c23619382770979d4cbceafd032112f891c3f\"" Jan 21 23:36:38.049423 systemd[1]: Started cri-containerd-e3afba8be03b972543baa3768dcace7e7c37850f8acca9fefeaf78b77116cfbb.scope - libcontainer container e3afba8be03b972543baa3768dcace7e7c37850f8acca9fefeaf78b77116cfbb. Jan 21 23:36:38.054383 containerd[2003]: time="2026-01-21T23:36:38.054262890Z" level=info msg="StartContainer for \"f6421442911047d295070923574c23619382770979d4cbceafd032112f891c3f\"" Jan 21 23:36:38.060874 containerd[2003]: time="2026-01-21T23:36:38.060734480Z" level=info msg="connecting to shim f6421442911047d295070923574c23619382770979d4cbceafd032112f891c3f" address="unix:///run/containerd/s/1abe622ca6c957617c3bc27c46d45517a464d451177d42f54b035a80a9831d68" protocol=ttrpc version=3 Jan 21 23:36:38.096000 audit: BPF prog-id=145 op=LOAD Jan 21 23:36:38.098000 audit: BPF prog-id=146 op=LOAD Jan 21 23:36:38.098000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3617 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533616662613862653033623937323534336261613337363864636163 Jan 21 23:36:38.099000 audit: BPF prog-id=146 op=UNLOAD Jan 21 23:36:38.099000 audit[3628]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3617 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533616662613862653033623937323534336261613337363864636163 Jan 21 23:36:38.099000 audit: BPF prog-id=147 op=LOAD Jan 21 23:36:38.099000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3617 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
23:36:38.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533616662613862653033623937323534336261613337363864636163 Jan 21 23:36:38.099000 audit: BPF prog-id=148 op=LOAD Jan 21 23:36:38.099000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3617 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533616662613862653033623937323534336261613337363864636163 Jan 21 23:36:38.099000 audit: BPF prog-id=148 op=UNLOAD Jan 21 23:36:38.099000 audit[3628]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3617 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533616662613862653033623937323534336261613337363864636163 Jan 21 23:36:38.099000 audit: BPF prog-id=147 op=UNLOAD Jan 21 23:36:38.099000 audit[3628]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3617 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533616662613862653033623937323534336261613337363864636163 Jan 21 23:36:38.099000 audit: BPF prog-id=149 op=LOAD Jan 21 23:36:38.099000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3617 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533616662613862653033623937323534336261613337363864636163 Jan 21 23:36:38.122383 systemd[1]: Started cri-containerd-f6421442911047d295070923574c23619382770979d4cbceafd032112f891c3f.scope - libcontainer container f6421442911047d295070923574c23619382770979d4cbceafd032112f891c3f. 
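
The audit PROCTITLE fields in the records above are hex-encoded command lines with NUL bytes separating the arguments. A short decoder, fed a prefix of one of the runc proctitles from the log (truncated here for brevity):

    # Decode an audit PROCTITLE value into its argv list.
    prefix = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"
    args = bytes.fromhex(prefix).split(b"\x00")
    print([a.decode() for a in args])
    # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']

The same trick applies to the iptables/ip6tables PROCTITLE records later in the log, which decode to the KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS and related chain setup commands issued by kube-proxy.
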
Jan 21 23:36:38.204652 containerd[2003]: time="2026-01-21T23:36:38.204566577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-phb7p,Uid:47636712-f986-472c-a585-43ab4c7900ae,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e3afba8be03b972543baa3768dcace7e7c37850f8acca9fefeaf78b77116cfbb\"" Jan 21 23:36:38.213020 containerd[2003]: time="2026-01-21T23:36:38.211800242Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 21 23:36:38.245722 kernel: kauditd_printk_skb: 76 callbacks suppressed Jan 21 23:36:38.245893 kernel: audit: type=1334 audit(1769038598.242:470): prog-id=150 op=LOAD Jan 21 23:36:38.242000 audit: BPF prog-id=150 op=LOAD Jan 21 23:36:38.242000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3570 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.253086 kernel: audit: type=1300 audit(1769038598.242:470): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3570 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.253775 kernel: audit: type=1327 audit(1769038598.242:470): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636343231343432393131303437643239353037303932333537346332 Jan 21 23:36:38.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636343231343432393131303437643239353037303932333537346332 Jan 21 23:36:38.245000 audit: BPF prog-id=151 op=LOAD Jan 21 23:36:38.261263 kernel: audit: type=1334 audit(1769038598.245:471): prog-id=151 op=LOAD Jan 21 23:36:38.245000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3570 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.268808 kernel: audit: type=1300 audit(1769038598.245:471): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3570 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636343231343432393131303437643239353037303932333537346332 Jan 21 23:36:38.275475 kernel: audit: type=1327 audit(1769038598.245:471): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636343231343432393131303437643239353037303932333537346332 Jan 21 23:36:38.245000 audit: BPF prog-id=151 
op=UNLOAD Jan 21 23:36:38.278185 kernel: audit: type=1334 audit(1769038598.245:472): prog-id=151 op=UNLOAD Jan 21 23:36:38.245000 audit[3647]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3570 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.287721 kernel: audit: type=1300 audit(1769038598.245:472): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3570 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.288238 kernel: audit: type=1327 audit(1769038598.245:472): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636343231343432393131303437643239353037303932333537346332 Jan 21 23:36:38.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636343231343432393131303437643239353037303932333537346332 Jan 21 23:36:38.245000 audit: BPF prog-id=150 op=UNLOAD Jan 21 23:36:38.296042 kernel: audit: type=1334 audit(1769038598.245:473): prog-id=150 op=UNLOAD Jan 21 23:36:38.245000 audit[3647]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3570 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636343231343432393131303437643239353037303932333537346332 Jan 21 23:36:38.245000 audit: BPF prog-id=152 op=LOAD Jan 21 23:36:38.245000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3570 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636343231343432393131303437643239353037303932333537346332 Jan 21 23:36:38.337620 containerd[2003]: time="2026-01-21T23:36:38.336878376Z" level=info msg="StartContainer for \"f6421442911047d295070923574c23619382770979d4cbceafd032112f891c3f\" returns successfully" Jan 21 23:36:38.619000 audit[3713]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3713 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.619000 audit[3713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff356d9f0 a2=0 a3=1 items=0 ppid=3660 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 23:36:38.629000 audit[3716]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3716 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.629000 audit[3716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffccb26db0 a2=0 a3=1 items=0 ppid=3660 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.629000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 23:36:38.633000 audit[3714]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.633000 audit[3714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe042eb90 a2=0 a3=1 items=0 ppid=3660 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 23:36:38.634000 audit[3717]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.634000 audit[3717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdef155b0 a2=0 a3=1 items=0 ppid=3660 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.634000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 23:36:38.636000 audit[3718]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3718 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.636000 audit[3718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdd339930 a2=0 a3=1 items=0 ppid=3660 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 23:36:38.640000 audit[3719]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3719 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.640000 audit[3719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc58aa320 a2=0 a3=1 items=0 ppid=3660 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.640000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 
23:36:38.751000 audit[3720]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.751000 audit[3720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe5e9e370 a2=0 a3=1 items=0 ppid=3660 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 23:36:38.757000 audit[3722]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.757000 audit[3722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffff821e90 a2=0 a3=1 items=0 ppid=3660 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 21 23:36:38.766000 audit[3725]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.766000 audit[3725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff23d22e0 a2=0 a3=1 items=0 ppid=3660 pid=3725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 21 23:36:38.770000 audit[3726]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3726 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.770000 audit[3726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb458430 a2=0 a3=1 items=0 ppid=3660 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 23:36:38.777000 audit[3728]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.777000 audit[3728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc87a7b60 a2=0 a3=1 items=0 ppid=3660 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.777000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 23:36:38.780000 audit[3729]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.780000 audit[3729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9611170 a2=0 a3=1 items=0 ppid=3660 pid=3729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 23:36:38.787000 audit[3731]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.787000 audit[3731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe20f92c0 a2=0 a3=1 items=0 ppid=3660 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 23:36:38.799000 audit[3734]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.799000 audit[3734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff2391d70 a2=0 a3=1 items=0 ppid=3660 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 21 23:36:38.803000 audit[3735]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.803000 audit[3735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffefd92d40 a2=0 a3=1 items=0 ppid=3660 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 23:36:38.815000 audit[3737]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.815000 audit[3737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe8d53c60 a2=0 a3=1 items=0 ppid=3660 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 23:36:38.820000 audit[3738]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3738 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.820000 audit[3738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff75d0fc0 a2=0 a3=1 items=0 ppid=3660 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.820000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 23:36:38.826000 audit[3740]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.826000 audit[3740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd5711db0 a2=0 a3=1 items=0 ppid=3660 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 23:36:38.835000 audit[3743]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.835000 audit[3743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffca9c7c50 a2=0 a3=1 items=0 ppid=3660 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 23:36:38.843000 audit[3746]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.843000 audit[3746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff1e854e0 a2=0 a3=1 items=0 ppid=3660 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 
23:36:38.846000 audit[3747]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.846000 audit[3747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe564b3a0 a2=0 a3=1 items=0 ppid=3660 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 23:36:38.851000 audit[3749]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.851000 audit[3749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcff50980 a2=0 a3=1 items=0 ppid=3660 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 23:36:38.859000 audit[3752]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.859000 audit[3752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff4daff40 a2=0 a3=1 items=0 ppid=3660 pid=3752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 23:36:38.862000 audit[3753]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3753 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.862000 audit[3753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe03dae80 a2=0 a3=1 items=0 ppid=3660 pid=3753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 23:36:38.867000 audit[3755]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3755 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 23:36:38.867000 audit[3755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffbdbbe30 a2=0 a3=1 items=0 ppid=3660 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.867000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 23:36:38.907000 audit[3761]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:36:38.907000 audit[3761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffddcc7ac0 a2=0 a3=1 items=0 ppid=3660 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:36:38.918000 audit[3761]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:36:38.918000 audit[3761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffddcc7ac0 a2=0 a3=1 items=0 ppid=3660 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:36:38.925000 audit[3766]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.925000 audit[3766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffff5712c0 a2=0 a3=1 items=0 ppid=3660 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 23:36:38.933000 audit[3768]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.933000 audit[3768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe3edac60 a2=0 a3=1 items=0 ppid=3660 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 21 23:36:38.943000 audit[3771]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.943000 audit[3771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcf3025c0 a2=0 a3=1 items=0 ppid=3660 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 21 23:36:38.946000 audit[3772]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.946000 audit[3772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0012d90 a2=0 a3=1 items=0 ppid=3660 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 23:36:38.953000 audit[3774]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.953000 audit[3774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe50bf630 a2=0 a3=1 items=0 ppid=3660 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 23:36:38.956000 audit[3775]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.956000 audit[3775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd85d7a50 a2=0 a3=1 items=0 ppid=3660 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 23:36:38.962000 audit[3777]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.962000 audit[3777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffee586af0 a2=0 a3=1 items=0 ppid=3660 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.962000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 21 23:36:38.971000 audit[3780]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.971000 audit[3780]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffce321210 a2=0 a3=1 items=0 ppid=3660 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.971000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 23:36:38.975000 audit[3781]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.975000 audit[3781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe30570d0 a2=0 a3=1 items=0 ppid=3660 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 23:36:38.981000 audit[3783]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.981000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc5834390 a2=0 a3=1 items=0 ppid=3660 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 23:36:38.984000 audit[3784]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.984000 audit[3784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9b06ed0 a2=0 a3=1 items=0 ppid=3660 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 23:36:38.994000 audit[3786]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:38.994000 audit[3786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffddda6200 a2=0 a3=1 items=0 ppid=3660 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:38.994000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 23:36:39.007000 audit[3789]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:39.007000 audit[3789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff3db42a0 a2=0 a3=1 items=0 ppid=3660 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 23:36:39.017000 audit[3792]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:39.017000 audit[3792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd80f8520 a2=0 a3=1 items=0 ppid=3660 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.017000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 21 23:36:39.020000 audit[3793]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:39.020000 audit[3793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffca9b790 a2=0 a3=1 items=0 ppid=3660 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.020000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 23:36:39.029000 audit[3795]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:39.029000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd22603c0 a2=0 a3=1 items=0 ppid=3660 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.029000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 23:36:39.038000 audit[3798]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:39.038000 audit[3798]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=ffffc4e435c0 a2=0 a3=1 items=0 ppid=3660 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.038000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 23:36:39.042000 audit[3799]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:39.042000 audit[3799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa02a220 a2=0 a3=1 items=0 ppid=3660 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 23:36:39.050000 audit[3801]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:39.050000 audit[3801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdd5928e0 a2=0 a3=1 items=0 ppid=3660 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 23:36:39.053000 audit[3802]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:39.053000 audit[3802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4f49a50 a2=0 a3=1 items=0 ppid=3660 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 23:36:39.060000 audit[3804]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 23:36:39.060000 audit[3804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcbb64630 a2=0 a3=1 items=0 ppid=3660 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.060000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 23:36:39.069000 audit[3807]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 
23:36:39.069000 audit[3807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffff6fa930 a2=0 a3=1 items=0 ppid=3660 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.069000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 23:36:39.078000 audit[3809]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 23:36:39.078000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe008ad10 a2=0 a3=1 items=0 ppid=3660 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.078000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:36:39.080000 audit[3809]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 23:36:39.080000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe008ad10 a2=0 a3=1 items=0 ppid=3660 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:39.080000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:36:39.212560 kubelet[3515]: I0121 23:36:39.211029 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xgsbk" podStartSLOduration=2.21100249 podStartE2EDuration="2.21100249s" podCreationTimestamp="2026-01-21 23:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:36:39.210526544 +0000 UTC m=+5.512994633" watchObservedRunningTime="2026-01-21 23:36:39.21100249 +0000 UTC m=+5.513470555" Jan 21 23:36:40.589285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345539890.mount: Deactivated successfully. 
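[Editor's note] The PROCTITLE values in the audit records above are hex-encoded command lines, with NUL bytes separating the words of the argument vector. A minimal decoding sketch (a standalone helper, not part of the log; the function name is illustrative):

```python
# Standalone helper (not part of the log): audit PROCTITLE values are
# hex-encoded, with NUL bytes separating the words of the argument vector.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# Example taken verbatim from one of the iptables-restore records above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```

Applied to the record above this decodes to `iptables-restore -w 5 -W 100000 --noflush --counters`, presumably the rule sync of the kube-proxy pod whose startup is logged just above.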
Jan 21 23:36:43.251026 containerd[2003]: time="2026-01-21T23:36:43.250720928Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:43.254060 containerd[2003]: time="2026-01-21T23:36:43.253921371Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 21 23:36:43.256725 containerd[2003]: time="2026-01-21T23:36:43.256622984Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:43.262958 containerd[2003]: time="2026-01-21T23:36:43.262111390Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:36:43.263848 containerd[2003]: time="2026-01-21T23:36:43.263772295Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 5.050520504s" Jan 21 23:36:43.263848 containerd[2003]: time="2026-01-21T23:36:43.263839438Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 21 23:36:43.270741 containerd[2003]: time="2026-01-21T23:36:43.270682099Z" level=info msg="CreateContainer within sandbox \"e3afba8be03b972543baa3768dcace7e7c37850f8acca9fefeaf78b77116cfbb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 21 23:36:43.292334 containerd[2003]: time="2026-01-21T23:36:43.292268792Z" level=info msg="Container 5fb285096e5faadb7c2b7342614dcf496a0d141ee692d1b3308f4f8c4e335b4f: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:36:43.309476 containerd[2003]: time="2026-01-21T23:36:43.309408470Z" level=info msg="CreateContainer within sandbox \"e3afba8be03b972543baa3768dcace7e7c37850f8acca9fefeaf78b77116cfbb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5fb285096e5faadb7c2b7342614dcf496a0d141ee692d1b3308f4f8c4e335b4f\"" Jan 21 23:36:43.310802 containerd[2003]: time="2026-01-21T23:36:43.310754749Z" level=info msg="StartContainer for \"5fb285096e5faadb7c2b7342614dcf496a0d141ee692d1b3308f4f8c4e335b4f\"" Jan 21 23:36:43.313790 containerd[2003]: time="2026-01-21T23:36:43.313593297Z" level=info msg="connecting to shim 5fb285096e5faadb7c2b7342614dcf496a0d141ee692d1b3308f4f8c4e335b4f" address="unix:///run/containerd/s/1b3b29a97b9a30a659661a4fc940fe5f3b3bb4aff9c3c58cb845e01cc0635c3d" protocol=ttrpc version=3 Jan 21 23:36:43.358408 systemd[1]: Started cri-containerd-5fb285096e5faadb7c2b7342614dcf496a0d141ee692d1b3308f4f8c4e335b4f.scope - libcontainer container 5fb285096e5faadb7c2b7342614dcf496a0d141ee692d1b3308f4f8c4e335b4f. 
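[Editor's note] As a rough cross-check of the pull timing reported above, the bytes-read counter and the 5.05 s pull duration give an approximate transfer rate; note that "bytes read" reflects compressed layer data rather than the unpacked image size, so this is only an estimate:

```python
# Rough throughput estimate from the containerd messages above (approximate:
# "bytes read" counts compressed layer data, not the unpacked image size).
bytes_read = 20_773_434        # from "stop pulling image ... bytes read=20773434"
pull_seconds = 5.050_520_504   # from "... in 5.050520504s"
print(f"{bytes_read / pull_seconds / 1e6:.2f} MB/s")  # ~4.11 MB/s
```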
Jan 21 23:36:43.385000 audit: BPF prog-id=153 op=LOAD Jan 21 23:36:43.387637 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 21 23:36:43.387746 kernel: audit: type=1334 audit(1769038603.385:526): prog-id=153 op=LOAD Jan 21 23:36:43.388000 audit: BPF prog-id=154 op=LOAD Jan 21 23:36:43.391560 kernel: audit: type=1334 audit(1769038603.388:527): prog-id=154 op=LOAD Jan 21 23:36:43.391694 kernel: audit: type=1300 audit(1769038603.388:527): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.388000 audit[3818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.404294 kernel: audit: type=1327 audit(1769038603.388:527): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.404424 kernel: audit: type=1334 audit(1769038603.388:528): prog-id=154 op=UNLOAD Jan 21 23:36:43.388000 audit: BPF prog-id=154 op=UNLOAD Jan 21 23:36:43.388000 audit[3818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.412327 kernel: audit: type=1300 audit(1769038603.388:528): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.412468 kernel: audit: type=1327 audit(1769038603.388:528): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.390000 audit: BPF prog-id=155 op=LOAD Jan 21 23:36:43.419773 kernel: audit: type=1334 audit(1769038603.390:529): prog-id=155 op=LOAD Jan 21 23:36:43.419966 kernel: audit: type=1300 audit(1769038603.390:529): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.390000 audit[3818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.431780 kernel: audit: type=1327 audit(1769038603.390:529): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.397000 audit: BPF prog-id=156 op=LOAD Jan 21 23:36:43.397000 audit[3818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.405000 audit: BPF prog-id=156 op=UNLOAD Jan 21 23:36:43.405000 audit[3818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.405000 audit: BPF prog-id=155 op=UNLOAD Jan 21 23:36:43.405000 audit[3818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.405000 audit: BPF prog-id=157 op=LOAD Jan 21 23:36:43.405000 audit[3818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3617 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:43.405000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566623238353039366535666161646237633262373334323631346463 Jan 21 23:36:43.475017 containerd[2003]: time="2026-01-21T23:36:43.474809497Z" level=info msg="StartContainer for \"5fb285096e5faadb7c2b7342614dcf496a0d141ee692d1b3308f4f8c4e335b4f\" returns successfully" Jan 21 23:36:52.156319 sudo[2357]: pam_unix(sudo:session): session closed for user root Jan 21 23:36:52.164592 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 21 23:36:52.164680 kernel: audit: type=1106 audit(1769038612.155:534): pid=2357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:36:52.155000 audit[2357]: USER_END pid=2357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:36:52.155000 audit[2357]: CRED_DISP pid=2357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:36:52.173627 kernel: audit: type=1104 audit(1769038612.155:535): pid=2357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 23:36:52.244690 sshd[2356]: Connection closed by 68.220.241.50 port 47158 Jan 21 23:36:52.245593 sshd-session[2353]: pam_unix(sshd:session): session closed for user core Jan 21 23:36:52.250000 audit[2353]: USER_END pid=2353 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:36:52.264909 systemd[1]: sshd@6-172.31.29.34:22-68.220.241.50:47158.service: Deactivated successfully. Jan 21 23:36:52.251000 audit[2353]: CRED_DISP pid=2353 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:36:52.271718 kernel: audit: type=1106 audit(1769038612.250:536): pid=2353 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:36:52.271822 kernel: audit: type=1104 audit(1769038612.251:537): pid=2353 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:36:52.273434 systemd[1]: session-7.scope: Deactivated successfully. Jan 21 23:36:52.274520 systemd[1]: session-7.scope: Consumed 12.296s CPU time, 222M memory peak. 
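[Editor's note] The NETFILTER_CFG records above and below show kube-proxy repeatedly re-running iptables-restore as its rule set grows. A small sketch for tallying these records by table, assuming the journal text has been saved to a file (the file name and helper are illustrative):

```python
# Minimal sketch (not part of the log): tally NETFILTER_CFG audit records by
# address family and table to see how many nft objects each sync registered.
import re
from collections import Counter

pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+)")

def tally(log_text: str) -> Counter:
    totals = Counter()
    for table, family, entries in pattern.findall(log_text):
        proto = {"2": "ipv4", "10": "ipv6"}.get(family, family)
        totals[(proto, table)] += int(entries)
    return totals

# Usage (file name is illustrative):
# print(tally(open("journal.txt").read()))
```

In these records family=2 is AF_INET and family=10 is AF_INET6, matching the iptables and ip6tables invocations logged above.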
Jan 21 23:36:52.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.34:22-68.220.241.50:47158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:52.281032 kernel: audit: type=1131 audit(1769038612.265:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.34:22-68.220.241.50:47158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:36:52.278542 systemd-logind[1971]: Session 7 logged out. Waiting for processes to exit. Jan 21 23:36:52.287126 systemd-logind[1971]: Removed session 7. Jan 21 23:36:54.850000 audit[3902]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:36:54.850000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcad553c0 a2=0 a3=1 items=0 ppid=3660 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:54.862884 kernel: audit: type=1325 audit(1769038614.850:539): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:36:54.863027 kernel: audit: type=1300 audit(1769038614.850:539): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcad553c0 a2=0 a3=1 items=0 ppid=3660 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:54.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:36:54.866738 kernel: audit: type=1327 audit(1769038614.850:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:36:54.871376 kernel: audit: type=1325 audit(1769038614.866:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:36:54.866000 audit[3902]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:36:54.866000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcad553c0 a2=0 a3=1 items=0 ppid=3660 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:54.866000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:36:54.882002 kernel: audit: type=1300 audit(1769038614.866:540): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcad553c0 a2=0 a3=1 items=0 ppid=3660 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:55.002000 audit[3904]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3904 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:36:55.002000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcd4c9fc0 a2=0 a3=1 items=0 ppid=3660 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:55.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:36:55.009000 audit[3904]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:36:55.009000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcd4c9fc0 a2=0 a3=1 items=0 ppid=3660 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:36:55.009000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:07.174000 audit[3907]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:07.177783 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 21 23:37:07.177869 kernel: audit: type=1325 audit(1769038627.174:543): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:07.174000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffccb817e0 a2=0 a3=1 items=0 ppid=3660 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:07.189388 kernel: audit: type=1300 audit(1769038627.174:543): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffccb817e0 a2=0 a3=1 items=0 ppid=3660 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:07.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:07.195159 kernel: audit: type=1327 audit(1769038627.174:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:07.182000 audit[3907]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:07.199859 kernel: audit: type=1325 audit(1769038627.182:544): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:07.182000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffccb817e0 a2=0 a3=1 items=0 ppid=3660 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:07.182000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:07.225064 kernel: audit: type=1300 audit(1769038627.182:544): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffccb817e0 a2=0 a3=1 items=0 ppid=3660 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:07.225200 kernel: audit: type=1327 audit(1769038627.182:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:07.269000 audit[3909]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:07.269000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff6319000 a2=0 a3=1 items=0 ppid=3660 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:07.281440 kernel: audit: type=1325 audit(1769038627.269:545): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:07.281739 kernel: audit: type=1300 audit(1769038627.269:545): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff6319000 a2=0 a3=1 items=0 ppid=3660 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:07.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:07.287195 kernel: audit: type=1327 audit(1769038627.269:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:07.291000 audit[3909]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:07.298120 kernel: audit: type=1325 audit(1769038627.291:546): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:07.291000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff6319000 a2=0 a3=1 items=0 ppid=3660 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:07.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:08.334000 audit[3911]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:08.334000 audit[3911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc0a660d0 a2=0 a3=1 items=0 ppid=3660 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
23:37:08.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:08.338000 audit[3911]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:08.338000 audit[3911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc0a660d0 a2=0 a3=1 items=0 ppid=3660 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:08.338000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:12.200000 audit[3915]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:12.202868 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 21 23:37:12.203023 kernel: audit: type=1325 audit(1769038632.200:549): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:12.200000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc62ea280 a2=0 a3=1 items=0 ppid=3660 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.214470 kernel: audit: type=1300 audit(1769038632.200:549): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc62ea280 a2=0 a3=1 items=0 ppid=3660 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:12.220783 kernel: audit: type=1327 audit(1769038632.200:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:12.223000 audit[3915]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:12.223000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc62ea280 a2=0 a3=1 items=0 ppid=3660 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.237102 kernel: audit: type=1325 audit(1769038632.223:550): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:12.237336 kernel: audit: type=1300 audit(1769038632.223:550): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc62ea280 a2=0 a3=1 items=0 ppid=3660 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.223000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:12.241056 kernel: audit: type=1327 audit(1769038632.223:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:12.263098 kubelet[3515]: I0121 23:37:12.262956 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-phb7p" podStartSLOduration=30.206744516 podStartE2EDuration="35.262934343s" podCreationTimestamp="2026-01-21 23:36:37 +0000 UTC" firstStartedPulling="2026-01-21 23:36:38.210131601 +0000 UTC m=+4.512599630" lastFinishedPulling="2026-01-21 23:36:43.26632144 +0000 UTC m=+9.568789457" observedRunningTime="2026-01-21 23:36:44.230628572 +0000 UTC m=+10.533096637" watchObservedRunningTime="2026-01-21 23:37:12.262934343 +0000 UTC m=+38.565402360" Jan 21 23:37:12.281157 systemd[1]: Created slice kubepods-besteffort-pod45550674_dbe5_483c_b31a_0b47fa5b7ff6.slice - libcontainer container kubepods-besteffort-pod45550674_dbe5_483c_b31a_0b47fa5b7ff6.slice. Jan 21 23:37:12.333866 kubelet[3515]: I0121 23:37:12.333349 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/45550674-dbe5-483c-b31a-0b47fa5b7ff6-typha-certs\") pod \"calico-typha-76dfd59bd5-sxr5h\" (UID: \"45550674-dbe5-483c-b31a-0b47fa5b7ff6\") " pod="calico-system/calico-typha-76dfd59bd5-sxr5h" Jan 21 23:37:12.334287 kubelet[3515]: I0121 23:37:12.334153 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66z8\" (UniqueName: \"kubernetes.io/projected/45550674-dbe5-483c-b31a-0b47fa5b7ff6-kube-api-access-t66z8\") pod \"calico-typha-76dfd59bd5-sxr5h\" (UID: \"45550674-dbe5-483c-b31a-0b47fa5b7ff6\") " pod="calico-system/calico-typha-76dfd59bd5-sxr5h" Jan 21 23:37:12.334491 kubelet[3515]: I0121 23:37:12.334398 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45550674-dbe5-483c-b31a-0b47fa5b7ff6-tigera-ca-bundle\") pod \"calico-typha-76dfd59bd5-sxr5h\" (UID: \"45550674-dbe5-483c-b31a-0b47fa5b7ff6\") " pod="calico-system/calico-typha-76dfd59bd5-sxr5h" Jan 21 23:37:12.344000 audit[3917]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:12.344000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd75a8570 a2=0 a3=1 items=0 ppid=3660 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.357630 kernel: audit: type=1325 audit(1769038632.344:551): table=filter:117 family=2 entries=22 op=nft_register_rule pid=3917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:12.357756 kernel: audit: type=1300 audit(1769038632.344:551): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd75a8570 a2=0 a3=1 items=0 ppid=3660 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.358010 kernel: audit: type=1327 
audit(1769038632.344:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:12.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:12.364000 audit[3917]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:12.364000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd75a8570 a2=0 a3=1 items=0 ppid=3660 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:12.370012 kernel: audit: type=1325 audit(1769038632.364:552): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:12.497162 systemd[1]: Created slice kubepods-besteffort-podb927dfc5_4086_4d93_af46_9256696f4e42.slice - libcontainer container kubepods-besteffort-podb927dfc5_4086_4d93_af46_9256696f4e42.slice. Jan 21 23:37:12.536020 kubelet[3515]: I0121 23:37:12.535920 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b927dfc5-4086-4d93-af46-9256696f4e42-tigera-ca-bundle\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.536404 kubelet[3515]: I0121 23:37:12.536339 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b927dfc5-4086-4d93-af46-9256696f4e42-var-lib-calico\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.536629 kubelet[3515]: I0121 23:37:12.536604 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b927dfc5-4086-4d93-af46-9256696f4e42-xtables-lock\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.536847 kubelet[3515]: I0121 23:37:12.536793 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b927dfc5-4086-4d93-af46-9256696f4e42-var-run-calico\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.537253 kubelet[3515]: I0121 23:37:12.537142 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b927dfc5-4086-4d93-af46-9256696f4e42-policysync\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.537546 kubelet[3515]: I0121 23:37:12.537450 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/b927dfc5-4086-4d93-af46-9256696f4e42-cni-bin-dir\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.537737 kubelet[3515]: I0121 23:37:12.537696 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b927dfc5-4086-4d93-af46-9256696f4e42-cni-log-dir\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.538002 kubelet[3515]: I0121 23:37:12.537933 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b927dfc5-4086-4d93-af46-9256696f4e42-cni-net-dir\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.538186 kubelet[3515]: I0121 23:37:12.538134 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b927dfc5-4086-4d93-af46-9256696f4e42-flexvol-driver-host\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.538437 kubelet[3515]: I0121 23:37:12.538366 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b927dfc5-4086-4d93-af46-9256696f4e42-lib-modules\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.538626 kubelet[3515]: I0121 23:37:12.538411 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b927dfc5-4086-4d93-af46-9256696f4e42-node-certs\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.538828 kubelet[3515]: I0121 23:37:12.538692 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7km\" (UniqueName: \"kubernetes.io/projected/b927dfc5-4086-4d93-af46-9256696f4e42-kube-api-access-th7km\") pod \"calico-node-5xxg7\" (UID: \"b927dfc5-4086-4d93-af46-9256696f4e42\") " pod="calico-system/calico-node-5xxg7" Jan 21 23:37:12.589929 containerd[2003]: time="2026-01-21T23:37:12.589846142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76dfd59bd5-sxr5h,Uid:45550674-dbe5-483c-b31a-0b47fa5b7ff6,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:12.599189 kubelet[3515]: E0121 23:37:12.597969 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:37:12.607560 kubelet[3515]: I0121 23:37:12.607485 3515 status_manager.go:890] "Failed to get status for pod" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" pod="calico-system/csi-node-driver-spv6f" err="pods \"csi-node-driver-spv6f\" is forbidden: User \"system:node:ip-172-31-29-34\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 
'ip-172-31-29-34' and this object" Jan 21 23:37:12.642054 kubelet[3515]: I0121 23:37:12.641769 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c98ebba-3094-4d44-b58e-8378134e1be8-registration-dir\") pod \"csi-node-driver-spv6f\" (UID: \"9c98ebba-3094-4d44-b58e-8378134e1be8\") " pod="calico-system/csi-node-driver-spv6f" Jan 21 23:37:12.643560 kubelet[3515]: E0121 23:37:12.643498 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.644234 kubelet[3515]: W0121 23:37:12.644024 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.644593 kubelet[3515]: E0121 23:37:12.644319 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.645387 kubelet[3515]: E0121 23:37:12.644750 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.645387 kubelet[3515]: W0121 23:37:12.644780 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.645387 kubelet[3515]: E0121 23:37:12.644812 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.645387 kubelet[3515]: E0121 23:37:12.645211 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.645387 kubelet[3515]: W0121 23:37:12.645233 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.645387 kubelet[3515]: E0121 23:37:12.645257 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.645387 kubelet[3515]: I0121 23:37:12.645296 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c98ebba-3094-4d44-b58e-8378134e1be8-kubelet-dir\") pod \"csi-node-driver-spv6f\" (UID: \"9c98ebba-3094-4d44-b58e-8378134e1be8\") " pod="calico-system/csi-node-driver-spv6f" Jan 21 23:37:12.647659 kubelet[3515]: E0121 23:37:12.646176 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.647659 kubelet[3515]: W0121 23:37:12.646208 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.647659 kubelet[3515]: E0121 23:37:12.646255 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.647659 kubelet[3515]: I0121 23:37:12.646335 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c98ebba-3094-4d44-b58e-8378134e1be8-socket-dir\") pod \"csi-node-driver-spv6f\" (UID: \"9c98ebba-3094-4d44-b58e-8378134e1be8\") " pod="calico-system/csi-node-driver-spv6f" Jan 21 23:37:12.650996 kubelet[3515]: E0121 23:37:12.650121 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.650996 kubelet[3515]: W0121 23:37:12.650780 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.650996 kubelet[3515]: E0121 23:37:12.650857 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.654167 kubelet[3515]: E0121 23:37:12.653595 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.654167 kubelet[3515]: W0121 23:37:12.653635 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.654167 kubelet[3515]: E0121 23:37:12.653732 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.655239 kubelet[3515]: E0121 23:37:12.655187 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.656436 kubelet[3515]: W0121 23:37:12.655673 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.656436 kubelet[3515]: E0121 23:37:12.655761 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.657954 kubelet[3515]: E0121 23:37:12.657649 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.657954 kubelet[3515]: W0121 23:37:12.657718 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.657954 kubelet[3515]: E0121 23:37:12.657787 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.660415 kubelet[3515]: E0121 23:37:12.660150 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.660415 kubelet[3515]: W0121 23:37:12.660189 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.660415 kubelet[3515]: E0121 23:37:12.660257 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.660695 containerd[2003]: time="2026-01-21T23:37:12.660586515Z" level=info msg="connecting to shim 5ecb9e89d9de1b9057a220be2d88db3707efd80a6dde8bf8ed9ffceb47aeb0dc" address="unix:///run/containerd/s/b685b771b7f56e3269d4cc56342fda2df608b66f35a1dc893270b66c6f59af4c" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:12.662196 kubelet[3515]: E0121 23:37:12.661895 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.662196 kubelet[3515]: W0121 23:37:12.661932 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.662391 kubelet[3515]: E0121 23:37:12.662282 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.663701 kubelet[3515]: E0121 23:37:12.663452 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.663701 kubelet[3515]: W0121 23:37:12.663487 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.663701 kubelet[3515]: E0121 23:37:12.663564 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.664721 kubelet[3515]: E0121 23:37:12.664655 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.664721 kubelet[3515]: W0121 23:37:12.664685 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.665097 kubelet[3515]: E0121 23:37:12.664946 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.665962 kubelet[3515]: E0121 23:37:12.665932 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.666200 kubelet[3515]: W0121 23:37:12.666127 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.666339 kubelet[3515]: E0121 23:37:12.666311 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.666912 kubelet[3515]: E0121 23:37:12.666861 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.667131 kubelet[3515]: W0121 23:37:12.666887 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.667230 kubelet[3515]: E0121 23:37:12.667178 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.669292 kubelet[3515]: E0121 23:37:12.668957 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.669702 kubelet[3515]: W0121 23:37:12.669408 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.670283 kubelet[3515]: E0121 23:37:12.670121 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.672378 kubelet[3515]: E0121 23:37:12.672305 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.673027 kubelet[3515]: W0121 23:37:12.672923 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.673479 kubelet[3515]: E0121 23:37:12.673421 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.673593 kubelet[3515]: I0121 23:37:12.673496 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxswv\" (UniqueName: \"kubernetes.io/projected/9c98ebba-3094-4d44-b58e-8378134e1be8-kube-api-access-zxswv\") pod \"csi-node-driver-spv6f\" (UID: \"9c98ebba-3094-4d44-b58e-8378134e1be8\") " pod="calico-system/csi-node-driver-spv6f" Jan 21 23:37:12.677153 kubelet[3515]: E0121 23:37:12.676091 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.677153 kubelet[3515]: W0121 23:37:12.676779 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.677153 kubelet[3515]: E0121 23:37:12.677107 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.679096 kubelet[3515]: E0121 23:37:12.678598 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.679096 kubelet[3515]: W0121 23:37:12.679029 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.680130 kubelet[3515]: E0121 23:37:12.679572 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.681059 kubelet[3515]: E0121 23:37:12.680502 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.681059 kubelet[3515]: W0121 23:37:12.680543 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.681059 kubelet[3515]: E0121 23:37:12.681006 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.681059 kubelet[3515]: W0121 23:37:12.681026 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.683164 kubelet[3515]: E0121 23:37:12.681940 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.683164 kubelet[3515]: W0121 23:37:12.682000 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.683164 kubelet[3515]: E0121 23:37:12.682037 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.683164 kubelet[3515]: E0121 23:37:12.682075 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.683164 kubelet[3515]: E0121 23:37:12.683084 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.683164 kubelet[3515]: W0121 23:37:12.683116 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.683711 kubelet[3515]: E0121 23:37:12.683182 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.685572 kubelet[3515]: E0121 23:37:12.685511 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.685572 kubelet[3515]: W0121 23:37:12.685555 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.685762 kubelet[3515]: E0121 23:37:12.685590 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.685762 kubelet[3515]: E0121 23:37:12.685625 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.687255 kubelet[3515]: E0121 23:37:12.687137 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.687255 kubelet[3515]: W0121 23:37:12.687179 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.687255 kubelet[3515]: E0121 23:37:12.687246 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.688732 kubelet[3515]: E0121 23:37:12.688682 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.688732 kubelet[3515]: W0121 23:37:12.688722 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.689823 kubelet[3515]: E0121 23:37:12.688961 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.691526 kubelet[3515]: E0121 23:37:12.691136 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.694435 kubelet[3515]: W0121 23:37:12.691524 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.694730 kubelet[3515]: E0121 23:37:12.694647 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.698754 kubelet[3515]: E0121 23:37:12.698321 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.698754 kubelet[3515]: W0121 23:37:12.698397 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.702015 kubelet[3515]: E0121 23:37:12.698962 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.702015 kubelet[3515]: W0121 23:37:12.699095 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.703239 kubelet[3515]: E0121 23:37:12.699014 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.704032 kubelet[3515]: E0121 23:37:12.703558 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.706233 kubelet[3515]: E0121 23:37:12.706153 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.706233 kubelet[3515]: W0121 23:37:12.706222 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.706948 kubelet[3515]: E0121 23:37:12.706859 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.708275 kubelet[3515]: E0121 23:37:12.708210 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.708275 kubelet[3515]: W0121 23:37:12.708249 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.708531 kubelet[3515]: E0121 23:37:12.708479 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.709593 kubelet[3515]: E0121 23:37:12.709241 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.709593 kubelet[3515]: W0121 23:37:12.709277 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.710080 kubelet[3515]: E0121 23:37:12.710012 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.710080 kubelet[3515]: W0121 23:37:12.710075 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.710698 kubelet[3515]: E0121 23:37:12.710473 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.710698 kubelet[3515]: E0121 23:37:12.710553 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.711378 kubelet[3515]: E0121 23:37:12.711312 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.711378 kubelet[3515]: W0121 23:37:12.711401 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.712080 kubelet[3515]: E0121 23:37:12.711812 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.712080 kubelet[3515]: W0121 23:37:12.711844 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.712080 kubelet[3515]: E0121 23:37:12.711870 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.712080 kubelet[3515]: E0121 23:37:12.711899 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.712920 kubelet[3515]: E0121 23:37:12.712163 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.712920 kubelet[3515]: W0121 23:37:12.712204 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.712920 kubelet[3515]: E0121 23:37:12.712280 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.712920 kubelet[3515]: E0121 23:37:12.712660 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.712920 kubelet[3515]: W0121 23:37:12.712680 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.714515 kubelet[3515]: E0121 23:37:12.713203 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.714515 kubelet[3515]: W0121 23:37:12.713227 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.714515 kubelet[3515]: E0121 23:37:12.714053 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.714515 kubelet[3515]: E0121 23:37:12.714116 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.714515 kubelet[3515]: E0121 23:37:12.714259 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.714515 kubelet[3515]: W0121 23:37:12.714281 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.716118 kubelet[3515]: E0121 23:37:12.714599 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.716118 kubelet[3515]: E0121 23:37:12.714620 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.716118 kubelet[3515]: W0121 23:37:12.714636 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.716118 kubelet[3515]: E0121 23:37:12.714685 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.716118 kubelet[3515]: E0121 23:37:12.715160 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.716118 kubelet[3515]: W0121 23:37:12.715202 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.716118 kubelet[3515]: E0121 23:37:12.715268 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.717747 kubelet[3515]: E0121 23:37:12.717275 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.717747 kubelet[3515]: W0121 23:37:12.717303 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.717747 kubelet[3515]: E0121 23:37:12.717635 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.717747 kubelet[3515]: W0121 23:37:12.717652 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.720204 kubelet[3515]: E0121 23:37:12.717953 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.720204 kubelet[3515]: W0121 23:37:12.717969 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.720204 kubelet[3515]: E0121 23:37:12.718153 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.720204 kubelet[3515]: E0121 23:37:12.718183 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.720204 kubelet[3515]: E0121 23:37:12.718517 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.720204 kubelet[3515]: W0121 23:37:12.718536 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.720204 kubelet[3515]: E0121 23:37:12.718827 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.720204 kubelet[3515]: W0121 23:37:12.718844 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.720204 kubelet[3515]: E0121 23:37:12.719051 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.722005 kubelet[3515]: E0121 23:37:12.721036 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.722005 kubelet[3515]: W0121 23:37:12.721082 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.722005 kubelet[3515]: E0121 23:37:12.721564 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.722267 kubelet[3515]: W0121 23:37:12.722150 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.722627 kubelet[3515]: E0121 23:37:12.722500 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.722627 kubelet[3515]: W0121 23:37:12.722530 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.722952 kubelet[3515]: E0121 23:37:12.722790 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.722952 kubelet[3515]: W0121 23:37:12.722817 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.724904 kubelet[3515]: E0121 23:37:12.723110 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.724904 kubelet[3515]: W0121 23:37:12.723137 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.724904 kubelet[3515]: E0121 23:37:12.724310 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.724904 kubelet[3515]: W0121 23:37:12.724337 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.724904 kubelet[3515]: E0121 23:37:12.724368 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.724904 kubelet[3515]: E0121 23:37:12.724424 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.724904 kubelet[3515]: E0121 23:37:12.724708 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.727550 kubelet[3515]: E0121 23:37:12.726882 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.727550 kubelet[3515]: W0121 23:37:12.727024 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.727550 kubelet[3515]: E0121 23:37:12.727067 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.728508 kubelet[3515]: E0121 23:37:12.728315 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.730023 kubelet[3515]: W0121 23:37:12.728703 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.734020 kubelet[3515]: E0121 23:37:12.731063 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.734020 kubelet[3515]: E0121 23:37:12.731522 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.734020 kubelet[3515]: W0121 23:37:12.731543 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.734020 kubelet[3515]: E0121 23:37:12.731567 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.734020 kubelet[3515]: E0121 23:37:12.731610 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.734020 kubelet[3515]: E0121 23:37:12.731643 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.736388 kubelet[3515]: E0121 23:37:12.734576 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.736388 kubelet[3515]: E0121 23:37:12.734637 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.740012 kubelet[3515]: E0121 23:37:12.739063 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.744248 kubelet[3515]: E0121 23:37:12.742090 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.744248 kubelet[3515]: W0121 23:37:12.744120 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.744248 kubelet[3515]: E0121 23:37:12.744174 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.750293 kubelet[3515]: E0121 23:37:12.748483 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.750293 kubelet[3515]: W0121 23:37:12.748531 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.752436 kubelet[3515]: E0121 23:37:12.750570 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.752436 kubelet[3515]: I0121 23:37:12.750959 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c98ebba-3094-4d44-b58e-8378134e1be8-varrun\") pod \"csi-node-driver-spv6f\" (UID: \"9c98ebba-3094-4d44-b58e-8378134e1be8\") " pod="calico-system/csi-node-driver-spv6f" Jan 21 23:37:12.754621 kubelet[3515]: E0121 23:37:12.753223 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.755439 kubelet[3515]: W0121 23:37:12.754780 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.755439 kubelet[3515]: E0121 23:37:12.754854 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.757272 kubelet[3515]: E0121 23:37:12.756718 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.758380 kubelet[3515]: W0121 23:37:12.757508 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.758380 kubelet[3515]: E0121 23:37:12.758177 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.763733 kubelet[3515]: E0121 23:37:12.763574 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.763733 kubelet[3515]: W0121 23:37:12.763610 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.763733 kubelet[3515]: E0121 23:37:12.763682 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.765741 kubelet[3515]: E0121 23:37:12.765061 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.766589 kubelet[3515]: W0121 23:37:12.766143 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.766589 kubelet[3515]: E0121 23:37:12.766231 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.770741 kubelet[3515]: E0121 23:37:12.770534 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.770741 kubelet[3515]: W0121 23:37:12.770572 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.770741 kubelet[3515]: E0121 23:37:12.770650 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.772319 kubelet[3515]: E0121 23:37:12.772283 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.772741 kubelet[3515]: W0121 23:37:12.772505 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.772741 kubelet[3515]: E0121 23:37:12.772655 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.775760 kubelet[3515]: E0121 23:37:12.775486 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.776140 kubelet[3515]: W0121 23:37:12.775941 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.776140 kubelet[3515]: E0121 23:37:12.776046 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.777395 kubelet[3515]: E0121 23:37:12.777189 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.779112 kubelet[3515]: W0121 23:37:12.779059 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.779511 kubelet[3515]: E0121 23:37:12.779327 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.809855 systemd[1]: Started cri-containerd-5ecb9e89d9de1b9057a220be2d88db3707efd80a6dde8bf8ed9ffceb47aeb0dc.scope - libcontainer container 5ecb9e89d9de1b9057a220be2d88db3707efd80a6dde8bf8ed9ffceb47aeb0dc. Jan 21 23:37:12.835150 kubelet[3515]: E0121 23:37:12.835092 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.835430 kubelet[3515]: W0121 23:37:12.835402 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.835553 kubelet[3515]: E0121 23:37:12.835530 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.852634 kubelet[3515]: E0121 23:37:12.852592 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.852634 kubelet[3515]: W0121 23:37:12.852628 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.853074 kubelet[3515]: E0121 23:37:12.852659 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.853288 kubelet[3515]: E0121 23:37:12.853255 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.853382 kubelet[3515]: W0121 23:37:12.853287 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.853382 kubelet[3515]: E0121 23:37:12.853327 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.854692 kubelet[3515]: E0121 23:37:12.854641 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.854692 kubelet[3515]: W0121 23:37:12.854681 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.855537 kubelet[3515]: E0121 23:37:12.854731 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.855537 kubelet[3515]: E0121 23:37:12.855144 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.855537 kubelet[3515]: W0121 23:37:12.855186 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.855537 kubelet[3515]: E0121 23:37:12.855299 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.856335 kubelet[3515]: E0121 23:37:12.855683 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.856335 kubelet[3515]: W0121 23:37:12.855705 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.856335 kubelet[3515]: E0121 23:37:12.856128 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.856809 kubelet[3515]: E0121 23:37:12.856734 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.856809 kubelet[3515]: W0121 23:37:12.856761 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.856809 kubelet[3515]: E0121 23:37:12.857072 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.857585 kubelet[3515]: E0121 23:37:12.857343 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.857585 kubelet[3515]: W0121 23:37:12.857362 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.857687 kubelet[3515]: E0121 23:37:12.857592 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.857742 kubelet[3515]: E0121 23:37:12.857697 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.857742 kubelet[3515]: W0121 23:37:12.857712 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.858936 kubelet[3515]: E0121 23:37:12.858866 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.859462 kubelet[3515]: E0121 23:37:12.859419 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.859462 kubelet[3515]: W0121 23:37:12.859454 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.859669 kubelet[3515]: E0121 23:37:12.859516 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.860072 kubelet[3515]: E0121 23:37:12.859792 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.860072 kubelet[3515]: W0121 23:37:12.859821 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.860072 kubelet[3515]: E0121 23:37:12.860002 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.860820 kubelet[3515]: E0121 23:37:12.860179 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.860820 kubelet[3515]: W0121 23:37:12.860196 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.860820 kubelet[3515]: E0121 23:37:12.860246 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.860820 kubelet[3515]: E0121 23:37:12.860517 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.860820 kubelet[3515]: W0121 23:37:12.860533 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.860820 kubelet[3515]: E0121 23:37:12.860691 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.861639 kubelet[3515]: E0121 23:37:12.860846 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.861639 kubelet[3515]: W0121 23:37:12.860862 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.861639 kubelet[3515]: E0121 23:37:12.861144 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.861639 kubelet[3515]: E0121 23:37:12.861282 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.861639 kubelet[3515]: W0121 23:37:12.861302 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.861639 kubelet[3515]: E0121 23:37:12.861512 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.862747 kubelet[3515]: E0121 23:37:12.861918 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.862747 kubelet[3515]: W0121 23:37:12.861938 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.863822 kubelet[3515]: E0121 23:37:12.863062 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.863822 kubelet[3515]: E0121 23:37:12.863307 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.863822 kubelet[3515]: W0121 23:37:12.863327 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.863822 kubelet[3515]: E0121 23:37:12.863643 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.863822 kubelet[3515]: E0121 23:37:12.863676 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.863822 kubelet[3515]: W0121 23:37:12.863693 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.863822 kubelet[3515]: E0121 23:37:12.863807 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.865439 kubelet[3515]: E0121 23:37:12.864294 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.865439 kubelet[3515]: W0121 23:37:12.864316 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.865439 kubelet[3515]: E0121 23:37:12.864601 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.865439 kubelet[3515]: E0121 23:37:12.864943 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.865439 kubelet[3515]: W0121 23:37:12.864962 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.865439 kubelet[3515]: E0121 23:37:12.865337 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.866259 kubelet[3515]: E0121 23:37:12.865725 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.866259 kubelet[3515]: W0121 23:37:12.865747 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.866259 kubelet[3515]: E0121 23:37:12.865864 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.867083 kubelet[3515]: E0121 23:37:12.866274 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.867083 kubelet[3515]: W0121 23:37:12.866297 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.867083 kubelet[3515]: E0121 23:37:12.866863 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.867083 kubelet[3515]: W0121 23:37:12.866886 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.868385 kubelet[3515]: E0121 23:37:12.867377 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.868385 kubelet[3515]: W0121 23:37:12.867401 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.868385 kubelet[3515]: E0121 23:37:12.866793 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.868385 kubelet[3515]: E0121 23:37:12.867545 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.868385 kubelet[3515]: E0121 23:37:12.867575 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.868385 kubelet[3515]: E0121 23:37:12.867952 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.868385 kubelet[3515]: W0121 23:37:12.868026 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.868385 kubelet[3515]: E0121 23:37:12.868079 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.869878 kubelet[3515]: E0121 23:37:12.869100 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.869878 kubelet[3515]: W0121 23:37:12.869136 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.869878 kubelet[3515]: E0121 23:37:12.869170 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:12.878000 audit: BPF prog-id=158 op=LOAD Jan 21 23:37:12.879000 audit: BPF prog-id=159 op=LOAD Jan 21 23:37:12.879000 audit[3967]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3936 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565636239653839643964653162393035376132323062653264383864 Jan 21 23:37:12.880000 audit: BPF prog-id=159 op=UNLOAD Jan 21 23:37:12.880000 audit[3967]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3936 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565636239653839643964653162393035376132323062653264383864 Jan 21 23:37:12.880000 audit: BPF prog-id=160 op=LOAD Jan 21 23:37:12.880000 audit[3967]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3936 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565636239653839643964653162393035376132323062653264383864 Jan 21 23:37:12.880000 audit: BPF prog-id=161 op=LOAD Jan 21 23:37:12.880000 audit[3967]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3936 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565636239653839643964653162393035376132323062653264383864 Jan 21 23:37:12.880000 audit: BPF prog-id=161 op=UNLOAD Jan 21 23:37:12.880000 audit[3967]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3936 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565636239653839643964653162393035376132323062653264383864 Jan 21 23:37:12.881000 audit: BPF prog-id=160 op=UNLOAD Jan 
21 23:37:12.881000 audit[3967]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3936 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565636239653839643964653162393035376132323062653264383864 Jan 21 23:37:12.881000 audit: BPF prog-id=162 op=LOAD Jan 21 23:37:12.881000 audit[3967]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3936 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:12.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565636239653839643964653162393035376132323062653264383864 Jan 21 23:37:12.899060 kubelet[3515]: E0121 23:37:12.899003 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:12.899060 kubelet[3515]: W0121 23:37:12.899045 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:12.899254 kubelet[3515]: E0121 23:37:12.899080 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:12.968347 containerd[2003]: time="2026-01-21T23:37:12.968266918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76dfd59bd5-sxr5h,Uid:45550674-dbe5-483c-b31a-0b47fa5b7ff6,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ecb9e89d9de1b9057a220be2d88db3707efd80a6dde8bf8ed9ffceb47aeb0dc\"" Jan 21 23:37:12.972471 containerd[2003]: time="2026-01-21T23:37:12.972410546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 21 23:37:13.109405 containerd[2003]: time="2026-01-21T23:37:13.109327861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5xxg7,Uid:b927dfc5-4086-4d93-af46-9256696f4e42,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:13.151012 containerd[2003]: time="2026-01-21T23:37:13.150892852Z" level=info msg="connecting to shim e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11" address="unix:///run/containerd/s/be305f90218e2dfa69180a37e29f01fd4127a72dede4f032b35994f0ce313da0" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:13.195365 systemd[1]: Started cri-containerd-e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11.scope - libcontainer container e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11. 
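The kubelet messages repeated above are FlexVolume plugin probing: kubelet walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and runs its driver binary with the "init" argument. The binary is not installed yet, so the call produces empty output, and parsing that empty output as JSON is what yields "unexpected end of JSON input". A minimal Go sketch of the failure mode (not kubelet's actual code; the DriverStatus fields are illustrative):

```go
// Sketch of why a missing FlexVolume driver surfaces as
// "unexpected end of JSON input": the driver is exec'd with "init",
// and the empty output is then unmarshalled as JSON, which fails.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the shape of the JSON a FlexVolume driver is
// expected to print on stdout (field names here are illustrative).
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeDriver(path string) error {
	out, execErr := exec.Command(path, "init").CombinedOutput() // file not found -> empty output
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is exactly "unexpected end of JSON input".
		return fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return nil
}

func main() {
	fmt.Println(probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}
```

On this node the plugin directory is normally populated by Calico's flexvol-driver init container, whose image (ghcr.io/flatcar/calico/pod2daemon-flexvol) is pulled later in this log, so the noise is expected to stop once that container has run.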
Jan 21 23:37:13.214000 audit: BPF prog-id=163 op=LOAD Jan 21 23:37:13.215000 audit: BPF prog-id=164 op=LOAD Jan 21 23:37:13.215000 audit[4081]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4069 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:13.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356239373433323235373061306637356631656435313237643435 Jan 21 23:37:13.216000 audit: BPF prog-id=164 op=UNLOAD Jan 21 23:37:13.216000 audit[4081]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:13.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356239373433323235373061306637356631656435313237643435 Jan 21 23:37:13.216000 audit: BPF prog-id=165 op=LOAD Jan 21 23:37:13.216000 audit[4081]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4069 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:13.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356239373433323235373061306637356631656435313237643435 Jan 21 23:37:13.217000 audit: BPF prog-id=166 op=LOAD Jan 21 23:37:13.217000 audit[4081]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4069 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:13.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356239373433323235373061306637356631656435313237643435 Jan 21 23:37:13.217000 audit: BPF prog-id=166 op=UNLOAD Jan 21 23:37:13.217000 audit[4081]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:13.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356239373433323235373061306637356631656435313237643435 Jan 21 23:37:13.218000 audit: BPF prog-id=165 op=UNLOAD Jan 21 23:37:13.218000 audit[4081]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:13.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356239373433323235373061306637356631656435313237643435 Jan 21 23:37:13.218000 audit: BPF prog-id=167 op=LOAD Jan 21 23:37:13.218000 audit[4081]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4069 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:13.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356239373433323235373061306637356631656435313237643435 Jan 21 23:37:13.250578 containerd[2003]: time="2026-01-21T23:37:13.250527682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5xxg7,Uid:b927dfc5-4086-4d93-af46-9256696f4e42,Namespace:calico-system,Attempt:0,} returns sandbox id \"e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11\"" Jan 21 23:37:13.382000 audit[4106]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:13.382000 audit[4106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffd209590 a2=0 a3=1 items=0 ppid=3660 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:13.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:13.388000 audit[4106]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:13.388000 audit[4106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffd209590 a2=0 a3=1 items=0 ppid=3660 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:13.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:14.113583 kubelet[3515]: E0121 23:37:14.113487 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:37:14.186407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88057296.mount: Deactivated successfully. 
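The audit PROCTITLE values in these records are the process command line, hex-encoded with NUL bytes between arguments and truncated at the audit field limit. The runc entries decode to roughly "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<task id>", cut off partway through the id of the sandbox being started, and the iptables-restore entry above decodes to "iptables-restore -w 5 -W 100000 --noflush --counters". A small Go helper, offered only as a convenience sketch, decodes such a field:

```go
// Decode an audit PROCTITLE value (hex-encoded, NUL-separated argv)
// into a readable command line.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes in the audit record.
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// The iptables-restore PROCTITLE from the records above.
	cmd, err := decodeProctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273")
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-restore -w 5 -W 100000 --noflush --counters
}
```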
Jan 21 23:37:15.075005 containerd[2003]: time="2026-01-21T23:37:15.074817570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:15.076685 containerd[2003]: time="2026-01-21T23:37:15.076613576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 21 23:37:15.077067 containerd[2003]: time="2026-01-21T23:37:15.077031039Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:15.080524 containerd[2003]: time="2026-01-21T23:37:15.080454966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:15.082641 containerd[2003]: time="2026-01-21T23:37:15.081827608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.109330118s" Jan 21 23:37:15.082641 containerd[2003]: time="2026-01-21T23:37:15.081879950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 21 23:37:15.083620 containerd[2003]: time="2026-01-21T23:37:15.083560525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 21 23:37:15.116524 containerd[2003]: time="2026-01-21T23:37:15.116476978Z" level=info msg="CreateContainer within sandbox \"5ecb9e89d9de1b9057a220be2d88db3707efd80a6dde8bf8ed9ffceb47aeb0dc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 21 23:37:15.130415 containerd[2003]: time="2026-01-21T23:37:15.130356255Z" level=info msg="Container c993ab3cfc7d14467641fd178c4f1898384d8a2e952ca2c943b8f49ff22f7ced: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:37:15.138685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420953322.mount: Deactivated successfully. Jan 21 23:37:15.149452 containerd[2003]: time="2026-01-21T23:37:15.149132934Z" level=info msg="CreateContainer within sandbox \"5ecb9e89d9de1b9057a220be2d88db3707efd80a6dde8bf8ed9ffceb47aeb0dc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c993ab3cfc7d14467641fd178c4f1898384d8a2e952ca2c943b8f49ff22f7ced\"" Jan 21 23:37:15.152106 containerd[2003]: time="2026-01-21T23:37:15.150503801Z" level=info msg="StartContainer for \"c993ab3cfc7d14467641fd178c4f1898384d8a2e952ca2c943b8f49ff22f7ced\"" Jan 21 23:37:15.153837 containerd[2003]: time="2026-01-21T23:37:15.153785299Z" level=info msg="connecting to shim c993ab3cfc7d14467641fd178c4f1898384d8a2e952ca2c943b8f49ff22f7ced" address="unix:///run/containerd/s/b685b771b7f56e3269d4cc56342fda2df608b66f35a1dc893270b66c6f59af4c" protocol=ttrpc version=3 Jan 21 23:37:15.196541 systemd[1]: Started cri-containerd-c993ab3cfc7d14467641fd178c4f1898384d8a2e952ca2c943b8f49ff22f7ced.scope - libcontainer container c993ab3cfc7d14467641fd178c4f1898384d8a2e952ca2c943b8f49ff22f7ced. 
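The typha image pull above (about 2.1 s) is also what the kubelet's pod_startup_latency_tracker entry a little further down (23:37:15.373) subtracts from the pod's end-to-end startup time: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short Go check using the timestamps reported there (monotonic-clock suffixes dropped; the SLO formula is inferred from the reported values, not quoted from kubelet source):

```go
// Sanity check of the pod_startup_latency_tracker figures for
// calico-typha-76dfd59bd5-sxr5h, using timestamps copied from this log.
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-21 23:37:12 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2026-01-21 23:37:12.970551272 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-01-21 23:37:15.083356351 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2026-01-21 23:37:15.372769149 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)          // 3.372769149s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.25996407s, matches podStartSLOduration
	fmt.Println(e2e, slo)
}
```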
Jan 21 23:37:15.223000 audit: BPF prog-id=168 op=LOAD Jan 21 23:37:15.225000 audit: BPF prog-id=169 op=LOAD Jan 21 23:37:15.225000 audit[4123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3936 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:15.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339393361623363666337643134343637363431666431373863346631 Jan 21 23:37:15.226000 audit: BPF prog-id=169 op=UNLOAD Jan 21 23:37:15.226000 audit[4123]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3936 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:15.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339393361623363666337643134343637363431666431373863346631 Jan 21 23:37:15.227000 audit: BPF prog-id=170 op=LOAD Jan 21 23:37:15.227000 audit[4123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3936 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:15.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339393361623363666337643134343637363431666431373863346631 Jan 21 23:37:15.227000 audit: BPF prog-id=171 op=LOAD Jan 21 23:37:15.227000 audit[4123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3936 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:15.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339393361623363666337643134343637363431666431373863346631 Jan 21 23:37:15.228000 audit: BPF prog-id=171 op=UNLOAD Jan 21 23:37:15.228000 audit[4123]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3936 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:15.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339393361623363666337643134343637363431666431373863346631 Jan 21 23:37:15.228000 audit: BPF prog-id=170 op=UNLOAD Jan 21 23:37:15.228000 audit[4123]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3936 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:15.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339393361623363666337643134343637363431666431373863346631 Jan 21 23:37:15.229000 audit: BPF prog-id=172 op=LOAD Jan 21 23:37:15.229000 audit[4123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3936 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:15.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339393361623363666337643134343637363431666431373863346631 Jan 21 23:37:15.294958 containerd[2003]: time="2026-01-21T23:37:15.294809072Z" level=info msg="StartContainer for \"c993ab3cfc7d14467641fd178c4f1898384d8a2e952ca2c943b8f49ff22f7ced\" returns successfully" Jan 21 23:37:15.339060 kubelet[3515]: E0121 23:37:15.337143 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.339060 kubelet[3515]: W0121 23:37:15.337220 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.339060 kubelet[3515]: E0121 23:37:15.337255 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.342647 kubelet[3515]: E0121 23:37:15.342251 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.342647 kubelet[3515]: W0121 23:37:15.342297 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.342647 kubelet[3515]: E0121 23:37:15.342389 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.346207 kubelet[3515]: E0121 23:37:15.346088 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.346207 kubelet[3515]: W0121 23:37:15.346156 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.346520 kubelet[3515]: E0121 23:37:15.346445 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:15.347221 kubelet[3515]: E0121 23:37:15.347155 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.347221 kubelet[3515]: W0121 23:37:15.347184 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.347641 kubelet[3515]: E0121 23:37:15.347419 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.349004 kubelet[3515]: E0121 23:37:15.348142 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.349313 kubelet[3515]: W0121 23:37:15.348170 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.349313 kubelet[3515]: E0121 23:37:15.349194 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.349952 kubelet[3515]: E0121 23:37:15.349924 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.351149 kubelet[3515]: W0121 23:37:15.350139 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.351149 kubelet[3515]: E0121 23:37:15.351078 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.353096 kubelet[3515]: E0121 23:37:15.352378 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.353566 kubelet[3515]: W0121 23:37:15.353288 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.353566 kubelet[3515]: E0121 23:37:15.353330 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.355035 kubelet[3515]: E0121 23:37:15.354966 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.355353 kubelet[3515]: W0121 23:37:15.355192 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.355353 kubelet[3515]: E0121 23:37:15.355233 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:15.358026 kubelet[3515]: E0121 23:37:15.356900 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.358026 kubelet[3515]: W0121 23:37:15.356938 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.358473 kubelet[3515]: E0121 23:37:15.358250 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.358819 kubelet[3515]: E0121 23:37:15.358790 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.359076 kubelet[3515]: W0121 23:37:15.358911 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.359076 kubelet[3515]: E0121 23:37:15.358949 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.360905 kubelet[3515]: E0121 23:37:15.360675 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.360905 kubelet[3515]: W0121 23:37:15.360745 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.360905 kubelet[3515]: E0121 23:37:15.360780 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.361572 kubelet[3515]: E0121 23:37:15.361544 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.362189 kubelet[3515]: W0121 23:37:15.362030 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.362189 kubelet[3515]: E0121 23:37:15.362073 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.364015 kubelet[3515]: E0121 23:37:15.363809 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.364015 kubelet[3515]: W0121 23:37:15.363845 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.364015 kubelet[3515]: E0121 23:37:15.363879 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:15.365012 kubelet[3515]: E0121 23:37:15.364633 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.365012 kubelet[3515]: W0121 23:37:15.364666 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.365012 kubelet[3515]: E0121 23:37:15.364696 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.366336 kubelet[3515]: E0121 23:37:15.366120 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.366336 kubelet[3515]: W0121 23:37:15.366166 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.366336 kubelet[3515]: E0121 23:37:15.366219 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.373689 kubelet[3515]: I0121 23:37:15.373572 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76dfd59bd5-sxr5h" podStartSLOduration=1.25996407 podStartE2EDuration="3.372769149s" podCreationTimestamp="2026-01-21 23:37:12 +0000 UTC" firstStartedPulling="2026-01-21 23:37:12.970551272 +0000 UTC m=+39.273019301" lastFinishedPulling="2026-01-21 23:37:15.083356351 +0000 UTC m=+41.385824380" observedRunningTime="2026-01-21 23:37:15.370721809 +0000 UTC m=+41.673189850" watchObservedRunningTime="2026-01-21 23:37:15.372769149 +0000 UTC m=+41.675237178" Jan 21 23:37:15.382018 kubelet[3515]: E0121 23:37:15.381927 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.382389 kubelet[3515]: W0121 23:37:15.382202 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.382826 kubelet[3515]: E0121 23:37:15.382496 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.384461 kubelet[3515]: E0121 23:37:15.384391 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.384461 kubelet[3515]: W0121 23:37:15.384434 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.384461 kubelet[3515]: E0121 23:37:15.384486 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:15.386014 kubelet[3515]: E0121 23:37:15.385324 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.386014 kubelet[3515]: W0121 23:37:15.385375 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.386014 kubelet[3515]: E0121 23:37:15.385697 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.386014 kubelet[3515]: W0121 23:37:15.385714 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.386014 kubelet[3515]: E0121 23:37:15.385740 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.386569 kubelet[3515]: E0121 23:37:15.386167 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.386569 kubelet[3515]: W0121 23:37:15.386190 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.386569 kubelet[3515]: E0121 23:37:15.386215 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.386828 kubelet[3515]: E0121 23:37:15.386648 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.386828 kubelet[3515]: W0121 23:37:15.386668 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.386828 kubelet[3515]: E0121 23:37:15.386692 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.388963 kubelet[3515]: E0121 23:37:15.387601 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.388963 kubelet[3515]: W0121 23:37:15.387639 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.388963 kubelet[3515]: E0121 23:37:15.387673 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:15.388963 kubelet[3515]: E0121 23:37:15.388171 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.388963 kubelet[3515]: E0121 23:37:15.388562 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.390110 kubelet[3515]: W0121 23:37:15.388193 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.390110 kubelet[3515]: E0121 23:37:15.389323 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.390110 kubelet[3515]: E0121 23:37:15.389711 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.390110 kubelet[3515]: W0121 23:37:15.389730 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.390110 kubelet[3515]: E0121 23:37:15.389755 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.392423 kubelet[3515]: E0121 23:37:15.392348 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.393390 kubelet[3515]: W0121 23:37:15.392386 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.393788 kubelet[3515]: E0121 23:37:15.393490 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.394186 kubelet[3515]: E0121 23:37:15.394008 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.394186 kubelet[3515]: W0121 23:37:15.394035 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.394186 kubelet[3515]: E0121 23:37:15.394065 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:15.395537 kubelet[3515]: E0121 23:37:15.395489 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.395537 kubelet[3515]: W0121 23:37:15.395524 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.395957 kubelet[3515]: E0121 23:37:15.395804 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.395957 kubelet[3515]: E0121 23:37:15.395841 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.395957 kubelet[3515]: W0121 23:37:15.395859 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.395957 kubelet[3515]: E0121 23:37:15.395905 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.396465 kubelet[3515]: E0121 23:37:15.396270 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.396465 kubelet[3515]: W0121 23:37:15.396289 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.396465 kubelet[3515]: E0121 23:37:15.396340 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.397176 kubelet[3515]: E0121 23:37:15.397116 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.397176 kubelet[3515]: W0121 23:37:15.397158 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.397176 kubelet[3515]: E0121 23:37:15.397202 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.397864 kubelet[3515]: E0121 23:37:15.397813 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.397864 kubelet[3515]: W0121 23:37:15.397859 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.398122 kubelet[3515]: E0121 23:37:15.397902 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:15.400225 kubelet[3515]: E0121 23:37:15.400166 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.400455 kubelet[3515]: W0121 23:37:15.400374 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.400694 kubelet[3515]: E0121 23:37:15.400572 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:15.401178 kubelet[3515]: E0121 23:37:15.401139 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:15.401178 kubelet[3515]: W0121 23:37:15.401175 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:15.401328 kubelet[3515]: E0121 23:37:15.401209 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.099039 kubelet[3515]: E0121 23:37:16.098537 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:37:16.304412 containerd[2003]: time="2026-01-21T23:37:16.304349513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:16.307294 containerd[2003]: time="2026-01-21T23:37:16.307203725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=2517" Jan 21 23:37:16.309828 containerd[2003]: time="2026-01-21T23:37:16.309752126Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:16.315730 containerd[2003]: time="2026-01-21T23:37:16.315652394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:16.318291 containerd[2003]: time="2026-01-21T23:37:16.318219650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.234597488s" Jan 21 23:37:16.318291 containerd[2003]: time="2026-01-21T23:37:16.318281683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 21 23:37:16.323459 containerd[2003]: 
time="2026-01-21T23:37:16.323193526Z" level=info msg="CreateContainer within sandbox \"e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 21 23:37:16.341749 containerd[2003]: time="2026-01-21T23:37:16.341681797Z" level=info msg="Container 576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:37:16.365509 containerd[2003]: time="2026-01-21T23:37:16.365169504Z" level=info msg="CreateContainer within sandbox \"e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215\"" Jan 21 23:37:16.367633 containerd[2003]: time="2026-01-21T23:37:16.367517053Z" level=info msg="StartContainer for \"576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215\"" Jan 21 23:37:16.372349 containerd[2003]: time="2026-01-21T23:37:16.372288290Z" level=info msg="connecting to shim 576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215" address="unix:///run/containerd/s/be305f90218e2dfa69180a37e29f01fd4127a72dede4f032b35994f0ce313da0" protocol=ttrpc version=3 Jan 21 23:37:16.379703 kubelet[3515]: E0121 23:37:16.379649 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.379703 kubelet[3515]: W0121 23:37:16.379690 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.383879 kubelet[3515]: E0121 23:37:16.379725 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.383879 kubelet[3515]: E0121 23:37:16.380061 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.383879 kubelet[3515]: W0121 23:37:16.380079 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.383879 kubelet[3515]: E0121 23:37:16.380102 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.383879 kubelet[3515]: E0121 23:37:16.383738 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.383879 kubelet[3515]: W0121 23:37:16.383776 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.383879 kubelet[3515]: E0121 23:37:16.383812 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:16.390007 kubelet[3515]: E0121 23:37:16.389831 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.390007 kubelet[3515]: W0121 23:37:16.389876 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.390007 kubelet[3515]: E0121 23:37:16.389911 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.393015 kubelet[3515]: E0121 23:37:16.391481 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.393015 kubelet[3515]: W0121 23:37:16.391522 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.393015 kubelet[3515]: E0121 23:37:16.391557 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.394763 kubelet[3515]: E0121 23:37:16.394697 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.394763 kubelet[3515]: W0121 23:37:16.394756 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.395051 kubelet[3515]: E0121 23:37:16.394794 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.396425 kubelet[3515]: E0121 23:37:16.396373 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.396425 kubelet[3515]: W0121 23:37:16.396410 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.396746 kubelet[3515]: E0121 23:37:16.396445 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.397577 kubelet[3515]: E0121 23:37:16.397480 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.397577 kubelet[3515]: W0121 23:37:16.397516 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.397577 kubelet[3515]: E0121 23:37:16.397549 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:16.398823 kubelet[3515]: E0121 23:37:16.398781 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.398823 kubelet[3515]: W0121 23:37:16.398817 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.399311 kubelet[3515]: E0121 23:37:16.398852 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.399311 kubelet[3515]: E0121 23:37:16.399247 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.399311 kubelet[3515]: W0121 23:37:16.399266 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.399311 kubelet[3515]: E0121 23:37:16.399290 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.400372 kubelet[3515]: E0121 23:37:16.400286 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.400372 kubelet[3515]: W0121 23:37:16.400326 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.400372 kubelet[3515]: E0121 23:37:16.400360 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.402304 kubelet[3515]: E0121 23:37:16.402030 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.402304 kubelet[3515]: W0121 23:37:16.402068 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.402304 kubelet[3515]: E0121 23:37:16.402102 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.403914 kubelet[3515]: E0121 23:37:16.403739 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.403914 kubelet[3515]: W0121 23:37:16.403788 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.403914 kubelet[3515]: E0121 23:37:16.403822 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:16.405961 kubelet[3515]: E0121 23:37:16.405192 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.405961 kubelet[3515]: W0121 23:37:16.405222 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.405961 kubelet[3515]: E0121 23:37:16.405251 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.408030 kubelet[3515]: E0121 23:37:16.406626 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.408030 kubelet[3515]: W0121 23:37:16.406666 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.408030 kubelet[3515]: E0121 23:37:16.406698 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.408631 kubelet[3515]: E0121 23:37:16.408579 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.408631 kubelet[3515]: W0121 23:37:16.408622 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.408759 kubelet[3515]: E0121 23:37:16.408656 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.411406 kubelet[3515]: E0121 23:37:16.410501 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.411406 kubelet[3515]: W0121 23:37:16.410547 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.411406 kubelet[3515]: E0121 23:37:16.410582 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:16.412632 kubelet[3515]: E0121 23:37:16.412200 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.412632 kubelet[3515]: W0121 23:37:16.412240 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.413641 kubelet[3515]: E0121 23:37:16.413223 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.413641 kubelet[3515]: W0121 23:37:16.413264 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.413641 kubelet[3515]: E0121 23:37:16.413298 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.414268 kubelet[3515]: E0121 23:37:16.414063 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.415404 kubelet[3515]: E0121 23:37:16.415103 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.415404 kubelet[3515]: W0121 23:37:16.415152 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.415404 kubelet[3515]: E0121 23:37:16.415203 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.417617 kubelet[3515]: E0121 23:37:16.417270 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.417617 kubelet[3515]: W0121 23:37:16.417310 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.417617 kubelet[3515]: E0121 23:37:16.417396 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.419163 kubelet[3515]: E0121 23:37:16.419110 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.419163 kubelet[3515]: W0121 23:37:16.419149 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.419163 kubelet[3515]: E0121 23:37:16.419218 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:16.420531 kubelet[3515]: E0121 23:37:16.420484 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.420531 kubelet[3515]: W0121 23:37:16.420522 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.421158 kubelet[3515]: E0121 23:37:16.420934 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.422157 kubelet[3515]: E0121 23:37:16.422100 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.422157 kubelet[3515]: W0121 23:37:16.422138 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.422157 kubelet[3515]: E0121 23:37:16.422204 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.423596 kubelet[3515]: E0121 23:37:16.423540 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.423596 kubelet[3515]: W0121 23:37:16.423577 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.424063 kubelet[3515]: E0121 23:37:16.423825 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.427937 kubelet[3515]: E0121 23:37:16.425952 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.427937 kubelet[3515]: W0121 23:37:16.427065 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.427937 kubelet[3515]: E0121 23:37:16.427750 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.428344 kubelet[3515]: E0121 23:37:16.428221 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.428344 kubelet[3515]: W0121 23:37:16.428244 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.430143 kubelet[3515]: E0121 23:37:16.430051 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:16.432176 kubelet[3515]: E0121 23:37:16.432106 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.432590 kubelet[3515]: W0121 23:37:16.432558 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.432822 kubelet[3515]: E0121 23:37:16.432796 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.437520 kubelet[3515]: E0121 23:37:16.436394 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.437520 kubelet[3515]: W0121 23:37:16.436434 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.437520 kubelet[3515]: E0121 23:37:16.436478 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.438811 kubelet[3515]: E0121 23:37:16.438124 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.438811 kubelet[3515]: W0121 23:37:16.438164 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.438811 kubelet[3515]: E0121 23:37:16.438198 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.441325 kubelet[3515]: E0121 23:37:16.441154 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.442464 kubelet[3515]: W0121 23:37:16.441722 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.442464 kubelet[3515]: E0121 23:37:16.441785 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.445282 kubelet[3515]: E0121 23:37:16.445144 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.446287 kubelet[3515]: W0121 23:37:16.446145 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.446287 kubelet[3515]: E0121 23:37:16.446208 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 23:37:16.448250 kubelet[3515]: E0121 23:37:16.448132 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 23:37:16.450823 kubelet[3515]: W0121 23:37:16.448168 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 23:37:16.451161 kubelet[3515]: E0121 23:37:16.451032 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 23:37:16.461358 systemd[1]: Started cri-containerd-576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215.scope - libcontainer container 576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215. Jan 21 23:37:16.495000 audit[4257]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:16.495000 audit[4257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffe4e4aa0 a2=0 a3=1 items=0 ppid=3660 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:16.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:16.501000 audit[4257]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:16.501000 audit[4257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffe4e4aa0 a2=0 a3=1 items=0 ppid=3660 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:16.501000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:16.549000 audit: BPF prog-id=173 op=LOAD Jan 21 23:37:16.549000 audit[4203]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=4069 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:16.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537366464343866393565303164383764653165643465333435663063 Jan 21 23:37:16.549000 audit: BPF prog-id=174 op=LOAD Jan 21 23:37:16.549000 audit[4203]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=4069 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:16.549000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537366464343866393565303164383764653165643465333435663063 Jan 21 23:37:16.549000 audit: BPF prog-id=174 op=UNLOAD Jan 21 23:37:16.549000 audit[4203]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:16.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537366464343866393565303164383764653165643465333435663063 Jan 21 23:37:16.549000 audit: BPF prog-id=173 op=UNLOAD Jan 21 23:37:16.549000 audit[4203]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:16.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537366464343866393565303164383764653165643465333435663063 Jan 21 23:37:16.549000 audit: BPF prog-id=175 op=LOAD Jan 21 23:37:16.549000 audit[4203]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=4069 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:16.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537366464343866393565303164383764653165643465333435663063 Jan 21 23:37:16.596659 containerd[2003]: time="2026-01-21T23:37:16.596256823Z" level=info msg="StartContainer for \"576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215\" returns successfully" Jan 21 23:37:16.639784 systemd[1]: cri-containerd-576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215.scope: Deactivated successfully. Jan 21 23:37:16.643000 audit: BPF prog-id=175 op=UNLOAD Jan 21 23:37:16.649770 containerd[2003]: time="2026-01-21T23:37:16.649717103Z" level=info msg="received container exit event container_id:\"576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215\" id:\"576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215\" pid:4251 exited_at:{seconds:1769038636 nanos:648324684}" Jan 21 23:37:16.696417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-576dd48f95e01d87de1ed4e345f0c23b1dfa85fc1d79a46285025ea123ec0215-rootfs.mount: Deactivated successfully. 
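
The block of kubelet messages above is one failure repeated on every plugin-probe pass: kubelet invokes the FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and expects a JSON status object on stdout. Because the binary has not been installed on the host yet (installing it is the job of the flexvol-driver container started in this same stretch of log), the call produces no output, and decoding the empty string fails with "unexpected end of JSON input". A minimal sketch of that decode step, with a hypothetical driverStatus struct standing in for the FlexVolume status JSON:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus is a hypothetical struct standing in for the JSON a
    // FlexVolume driver is expected to print on "init", e.g.
    // {"status":"Success","capabilities":{"attach":false}}.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The uds driver binary is not on the host yet, so the driver call
        // produces no stdout at all; decoding the empty output reproduces
        // the error string logged by driver-call.go:262 above.
        var st driverStatus
        if err := json.Unmarshal([]byte(""), &st); err != nil {
            fmt.Println(err) // unexpected end of JSON input
        }
    }

These probe warnings normally stop once the driver binary is in place and init returns a well-formed success status.
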
Jan 21 23:37:17.362341 containerd[2003]: time="2026-01-21T23:37:17.362274697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 21 23:37:18.098825 kubelet[3515]: E0121 23:37:18.098709 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:37:20.099707 kubelet[3515]: E0121 23:37:20.099657 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:37:20.397103 containerd[2003]: time="2026-01-21T23:37:20.396748766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:20.399857 containerd[2003]: time="2026-01-21T23:37:20.399723961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 21 23:37:20.400876 containerd[2003]: time="2026-01-21T23:37:20.400835133Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:20.403932 containerd[2003]: time="2026-01-21T23:37:20.403882353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:20.405424 containerd[2003]: time="2026-01-21T23:37:20.405365027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.043015536s" Jan 21 23:37:20.405540 containerd[2003]: time="2026-01-21T23:37:20.405421087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 21 23:37:20.411202 containerd[2003]: time="2026-01-21T23:37:20.410953464Z" level=info msg="CreateContainer within sandbox \"e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 21 23:37:20.427301 containerd[2003]: time="2026-01-21T23:37:20.427231888Z" level=info msg="Container a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:37:20.446258 containerd[2003]: time="2026-01-21T23:37:20.446199428Z" level=info msg="CreateContainer within sandbox \"e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74\"" Jan 21 23:37:20.447662 containerd[2003]: time="2026-01-21T23:37:20.447586546Z" level=info msg="StartContainer for \"a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74\"" Jan 21 23:37:20.451934 
containerd[2003]: time="2026-01-21T23:37:20.451848914Z" level=info msg="connecting to shim a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74" address="unix:///run/containerd/s/be305f90218e2dfa69180a37e29f01fd4127a72dede4f032b35994f0ce313da0" protocol=ttrpc version=3 Jan 21 23:37:20.493314 systemd[1]: Started cri-containerd-a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74.scope - libcontainer container a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74. Jan 21 23:37:20.586000 audit: BPF prog-id=176 op=LOAD Jan 21 23:37:20.588957 kernel: kauditd_printk_skb: 96 callbacks suppressed Jan 21 23:37:20.589061 kernel: audit: type=1334 audit(1769038640.586:587): prog-id=176 op=LOAD Jan 21 23:37:20.586000 audit[4297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=4069 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:20.596508 kernel: audit: type=1300 audit(1769038640.586:587): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=4069 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:20.596615 kernel: audit: type=1327 audit(1769038640.586:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762383864363264643331306566383864383861323638343733 Jan 21 23:37:20.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762383864363264643331306566383864383861323638343733 Jan 21 23:37:20.602344 kernel: audit: type=1334 audit(1769038640.587:588): prog-id=177 op=LOAD Jan 21 23:37:20.587000 audit: BPF prog-id=177 op=LOAD Jan 21 23:37:20.587000 audit[4297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=4069 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:20.610022 kernel: audit: type=1300 audit(1769038640.587:588): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=4069 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:20.610093 kernel: audit: type=1327 audit(1769038640.587:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762383864363264643331306566383864383861323638343733 Jan 21 23:37:20.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762383864363264643331306566383864383861323638343733 Jan 21 23:37:20.589000 
audit: BPF prog-id=177 op=UNLOAD Jan 21 23:37:20.617519 kernel: audit: type=1334 audit(1769038640.589:589): prog-id=177 op=UNLOAD Jan 21 23:37:20.589000 audit[4297]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:20.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762383864363264643331306566383864383861323638343733 Jan 21 23:37:20.632688 kernel: audit: type=1300 audit(1769038640.589:589): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:20.632839 kernel: audit: type=1327 audit(1769038640.589:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762383864363264643331306566383864383861323638343733 Jan 21 23:37:20.589000 audit: BPF prog-id=176 op=UNLOAD Jan 21 23:37:20.635262 kernel: audit: type=1334 audit(1769038640.589:590): prog-id=176 op=UNLOAD Jan 21 23:37:20.589000 audit[4297]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:20.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762383864363264643331306566383864383861323638343733 Jan 21 23:37:20.589000 audit: BPF prog-id=178 op=LOAD Jan 21 23:37:20.589000 audit[4297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400018c648 a2=98 a3=0 items=0 ppid=4069 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:20.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762383864363264643331306566383864383861323638343733 Jan 21 23:37:20.674477 containerd[2003]: time="2026-01-21T23:37:20.674244937Z" level=info msg="StartContainer for \"a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74\" returns successfully" Jan 21 23:37:21.680308 containerd[2003]: time="2026-01-21T23:37:21.680227873Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 21 23:37:21.684185 systemd[1]: cri-containerd-a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74.scope: 
Deactivated successfully. Jan 21 23:37:21.685040 systemd[1]: cri-containerd-a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74.scope: Consumed 1.006s CPU time, 185M memory peak, 165.9M written to disk. Jan 21 23:37:21.686000 audit: BPF prog-id=178 op=UNLOAD Jan 21 23:37:21.691847 containerd[2003]: time="2026-01-21T23:37:21.691714659Z" level=info msg="received container exit event container_id:\"a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74\" id:\"a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74\" pid:4310 exited_at:{seconds:1769038641 nanos:690785627}" Jan 21 23:37:21.736780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2a7b88d62dd310ef88d88a2684733b20350971fe45ea9098631b6cab371ff74-rootfs.mount: Deactivated successfully. Jan 21 23:37:21.744338 kubelet[3515]: I0121 23:37:21.744278 3515 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 21 23:37:21.828777 kubelet[3515]: I0121 23:37:21.828707 3515 status_manager.go:890] "Failed to get status for pod" podUID="62aba3b1-39b5-4efd-90ba-082af4dc8ffb" pod="kube-system/coredns-668d6bf9bc-rzmxg" err="pods \"coredns-668d6bf9bc-rzmxg\" is forbidden: User \"system:node:ip-172-31-29-34\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-34' and this object" Jan 21 23:37:21.834409 kubelet[3515]: W0121 23:37:21.834347 3515 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-29-34" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-34' and this object Jan 21 23:37:21.834571 kubelet[3515]: E0121 23:37:21.834419 3515 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-29-34\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-34' and this object" logger="UnhandledError" Jan 21 23:37:21.839697 systemd[1]: Created slice kubepods-burstable-pod62aba3b1_39b5_4efd_90ba_082af4dc8ffb.slice - libcontainer container kubepods-burstable-pod62aba3b1_39b5_4efd_90ba_082af4dc8ffb.slice. 
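
The reload error logged at 23:37:21.680 looks to be an ordering effect rather than a failure of the install-cni container: the first file it writes under /etc/cni/net.d is calico-kubeconfig, which trips containerd's watch before any *.conflist exists, so the loader still reports "no network config found"; the earlier NetworkPluginNotReady / "cni plugin not initialized" messages for csi-node-driver-spv6f have the same cause. A sketch, assuming the conventional 10-calico.conflist file name and a heavily trimmed plugin list, of the kind of network config containerd is waiting for:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Trimmed-down, illustrative conflist; the real file produced by the
        // install-cni container carries the full Calico configuration
        // (IPAM, MTU, kubeconfig path, and so on).
        conflist := `{
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "calico", "log_level": "info" },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`
        // Once a *.conflist like this lands in /etc/cni/net.d, containerd's
        // watcher reloads it and the NetworkReady condition can clear.
        if err := os.WriteFile("/etc/cni/net.d/10-calico.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Println(err)
        }
    }
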
Jan 21 23:37:21.856632 kubelet[3515]: I0121 23:37:21.855645 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62aba3b1-39b5-4efd-90ba-082af4dc8ffb-config-volume\") pod \"coredns-668d6bf9bc-rzmxg\" (UID: \"62aba3b1-39b5-4efd-90ba-082af4dc8ffb\") " pod="kube-system/coredns-668d6bf9bc-rzmxg" Jan 21 23:37:21.856632 kubelet[3515]: I0121 23:37:21.855735 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bkmb\" (UniqueName: \"kubernetes.io/projected/ca6e1be7-3778-4e58-b701-59e16c774819-kube-api-access-9bkmb\") pod \"calico-kube-controllers-8c7587b9f-qxd5v\" (UID: \"ca6e1be7-3778-4e58-b701-59e16c774819\") " pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" Jan 21 23:37:21.856632 kubelet[3515]: I0121 23:37:21.855781 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrr6b\" (UniqueName: \"kubernetes.io/projected/20617c9b-94b4-4cb3-a6b6-dc13407eb549-kube-api-access-zrr6b\") pod \"calico-apiserver-84667c98fc-fz867\" (UID: \"20617c9b-94b4-4cb3-a6b6-dc13407eb549\") " pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" Jan 21 23:37:21.856632 kubelet[3515]: I0121 23:37:21.855818 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eed98923-e571-4f3e-9b7a-fa237350831a-config-volume\") pod \"coredns-668d6bf9bc-glnmd\" (UID: \"eed98923-e571-4f3e-9b7a-fa237350831a\") " pod="kube-system/coredns-668d6bf9bc-glnmd" Jan 21 23:37:21.856632 kubelet[3515]: I0121 23:37:21.855861 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6e1be7-3778-4e58-b701-59e16c774819-tigera-ca-bundle\") pod \"calico-kube-controllers-8c7587b9f-qxd5v\" (UID: \"ca6e1be7-3778-4e58-b701-59e16c774819\") " pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" Jan 21 23:37:21.857129 kubelet[3515]: I0121 23:37:21.855910 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmp8h\" (UniqueName: \"kubernetes.io/projected/62aba3b1-39b5-4efd-90ba-082af4dc8ffb-kube-api-access-bmp8h\") pod \"coredns-668d6bf9bc-rzmxg\" (UID: \"62aba3b1-39b5-4efd-90ba-082af4dc8ffb\") " pod="kube-system/coredns-668d6bf9bc-rzmxg" Jan 21 23:37:21.857129 kubelet[3515]: I0121 23:37:21.855959 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20617c9b-94b4-4cb3-a6b6-dc13407eb549-calico-apiserver-certs\") pod \"calico-apiserver-84667c98fc-fz867\" (UID: \"20617c9b-94b4-4cb3-a6b6-dc13407eb549\") " pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" Jan 21 23:37:21.857470 kubelet[3515]: I0121 23:37:21.857427 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw2lg\" (UniqueName: \"kubernetes.io/projected/eed98923-e571-4f3e-9b7a-fa237350831a-kube-api-access-vw2lg\") pod \"coredns-668d6bf9bc-glnmd\" (UID: \"eed98923-e571-4f3e-9b7a-fa237350831a\") " pod="kube-system/coredns-668d6bf9bc-glnmd" Jan 21 23:37:21.877950 systemd[1]: Created slice kubepods-besteffort-podca6e1be7_3778_4e58_b701_59e16c774819.slice - libcontainer container 
kubepods-besteffort-podca6e1be7_3778_4e58_b701_59e16c774819.slice. Jan 21 23:37:21.906855 systemd[1]: Created slice kubepods-burstable-podeed98923_e571_4f3e_9b7a_fa237350831a.slice - libcontainer container kubepods-burstable-podeed98923_e571_4f3e_9b7a_fa237350831a.slice. Jan 21 23:37:21.941146 systemd[1]: Created slice kubepods-besteffort-pod20617c9b_94b4_4cb3_a6b6_dc13407eb549.slice - libcontainer container kubepods-besteffort-pod20617c9b_94b4_4cb3_a6b6_dc13407eb549.slice. Jan 21 23:37:21.959683 kubelet[3515]: I0121 23:37:21.959551 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9be5f6a-b238-4709-b078-d405c449b532-goldmane-ca-bundle\") pod \"goldmane-666569f655-jq2tn\" (UID: \"c9be5f6a-b238-4709-b078-d405c449b532\") " pod="calico-system/goldmane-666569f655-jq2tn" Jan 21 23:37:21.959986 kubelet[3515]: I0121 23:37:21.959938 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdxpq\" (UniqueName: \"kubernetes.io/projected/c9be5f6a-b238-4709-b078-d405c449b532-kube-api-access-xdxpq\") pod \"goldmane-666569f655-jq2tn\" (UID: \"c9be5f6a-b238-4709-b078-d405c449b532\") " pod="calico-system/goldmane-666569f655-jq2tn" Jan 21 23:37:21.960246 kubelet[3515]: I0121 23:37:21.960096 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4da6df64-dea8-43e7-aa8f-e2a9da218238-whisker-ca-bundle\") pod \"whisker-5bd44b66fc-z25sm\" (UID: \"4da6df64-dea8-43e7-aa8f-e2a9da218238\") " pod="calico-system/whisker-5bd44b66fc-z25sm" Jan 21 23:37:21.960375 systemd[1]: Created slice kubepods-besteffort-podde79900c_2c4b_48e5_9995_14ed014509c5.slice - libcontainer container kubepods-besteffort-podde79900c_2c4b_48e5_9995_14ed014509c5.slice. 
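
Each "VerifyControllerAttachedVolume started" line above corresponds to one entry in the volumes section of a pod that just became schedulable on this node: ConfigMap volumes (coredns config-volume, tigera-ca-bundle), Secret volumes (the calico-apiserver certificates), and projected service-account tokens (the kube-api-access-* names). A sketch of two of them expressed with the k8s.io/api/core/v1 Go types, with names copied from the log and the secret name assumed to match the volume name:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        volumes := []corev1.Volume{
            {
                // "config-volume" for coredns-668d6bf9bc-rzmxg: a ConfigMap
                // volume backed by the "coredns" ConfigMap the reflector above
                // is trying to list.
                Name: "config-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
                    },
                },
            },
            {
                // "calico-apiserver-certs" for calico-apiserver-84667c98fc-fz867:
                // a Secret volume; the secret name is assumed here.
                Name: "calico-apiserver-certs",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "calico-apiserver-certs"},
                },
            },
        }
        for _, v := range volumes {
            fmt.Println("volume:", v.Name)
        }
    }
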
Jan 21 23:37:21.963544 kubelet[3515]: I0121 23:37:21.962079 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4da6df64-dea8-43e7-aa8f-e2a9da218238-whisker-backend-key-pair\") pod \"whisker-5bd44b66fc-z25sm\" (UID: \"4da6df64-dea8-43e7-aa8f-e2a9da218238\") " pod="calico-system/whisker-5bd44b66fc-z25sm" Jan 21 23:37:21.963544 kubelet[3515]: I0121 23:37:21.963098 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9be5f6a-b238-4709-b078-d405c449b532-config\") pod \"goldmane-666569f655-jq2tn\" (UID: \"c9be5f6a-b238-4709-b078-d405c449b532\") " pod="calico-system/goldmane-666569f655-jq2tn" Jan 21 23:37:21.963544 kubelet[3515]: I0121 23:37:21.963362 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c9be5f6a-b238-4709-b078-d405c449b532-goldmane-key-pair\") pod \"goldmane-666569f655-jq2tn\" (UID: \"c9be5f6a-b238-4709-b078-d405c449b532\") " pod="calico-system/goldmane-666569f655-jq2tn" Jan 21 23:37:21.963544 kubelet[3515]: I0121 23:37:21.963455 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/920c851c-0448-41e7-8aac-ea1379198aa5-calico-apiserver-certs\") pod \"calico-apiserver-5468c6d76d-ffj6z\" (UID: \"920c851c-0448-41e7-8aac-ea1379198aa5\") " pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" Jan 21 23:37:21.963996 kubelet[3515]: I0121 23:37:21.963906 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d8fl\" (UniqueName: \"kubernetes.io/projected/de79900c-2c4b-48e5-9995-14ed014509c5-kube-api-access-8d8fl\") pod \"calico-apiserver-84667c98fc-g2rqz\" (UID: \"de79900c-2c4b-48e5-9995-14ed014509c5\") " pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" Jan 21 23:37:21.964140 kubelet[3515]: I0121 23:37:21.964111 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8bfc\" (UniqueName: \"kubernetes.io/projected/920c851c-0448-41e7-8aac-ea1379198aa5-kube-api-access-b8bfc\") pod \"calico-apiserver-5468c6d76d-ffj6z\" (UID: \"920c851c-0448-41e7-8aac-ea1379198aa5\") " pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" Jan 21 23:37:21.965837 kubelet[3515]: I0121 23:37:21.965168 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7lw8\" (UniqueName: \"kubernetes.io/projected/4da6df64-dea8-43e7-aa8f-e2a9da218238-kube-api-access-b7lw8\") pod \"whisker-5bd44b66fc-z25sm\" (UID: \"4da6df64-dea8-43e7-aa8f-e2a9da218238\") " pod="calico-system/whisker-5bd44b66fc-z25sm" Jan 21 23:37:21.968362 kubelet[3515]: I0121 23:37:21.966131 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de79900c-2c4b-48e5-9995-14ed014509c5-calico-apiserver-certs\") pod \"calico-apiserver-84667c98fc-g2rqz\" (UID: \"de79900c-2c4b-48e5-9995-14ed014509c5\") " pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" Jan 21 23:37:22.014733 systemd[1]: Created slice kubepods-besteffort-pod4da6df64_dea8_43e7_aa8f_e2a9da218238.slice - libcontainer container 
kubepods-besteffort-pod4da6df64_dea8_43e7_aa8f_e2a9da218238.slice. Jan 21 23:37:22.145363 systemd[1]: Created slice kubepods-besteffort-pod920c851c_0448_41e7_8aac_ea1379198aa5.slice - libcontainer container kubepods-besteffort-pod920c851c_0448_41e7_8aac_ea1379198aa5.slice. Jan 21 23:37:22.175253 systemd[1]: Created slice kubepods-besteffort-podc9be5f6a_b238_4709_b078_d405c449b532.slice - libcontainer container kubepods-besteffort-podc9be5f6a_b238_4709_b078_d405c449b532.slice. Jan 21 23:37:22.199159 containerd[2003]: time="2026-01-21T23:37:22.198286276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c7587b9f-qxd5v,Uid:ca6e1be7-3778-4e58-b701-59e16c774819,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:22.203505 containerd[2003]: time="2026-01-21T23:37:22.203336001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jq2tn,Uid:c9be5f6a-b238-4709-b078-d405c449b532,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:22.213027 systemd[1]: Created slice kubepods-besteffort-pod9c98ebba_3094_4d44_b58e_8378134e1be8.slice - libcontainer container kubepods-besteffort-pod9c98ebba_3094_4d44_b58e_8378134e1be8.slice. Jan 21 23:37:22.223373 containerd[2003]: time="2026-01-21T23:37:22.222500015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spv6f,Uid:9c98ebba-3094-4d44-b58e-8378134e1be8,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:22.276668 containerd[2003]: time="2026-01-21T23:37:22.276603270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84667c98fc-fz867,Uid:20617c9b-94b4-4cb3-a6b6-dc13407eb549,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:37:22.287954 containerd[2003]: time="2026-01-21T23:37:22.287877414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84667c98fc-g2rqz,Uid:de79900c-2c4b-48e5-9995-14ed014509c5,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:37:22.335833 containerd[2003]: time="2026-01-21T23:37:22.335784556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd44b66fc-z25sm,Uid:4da6df64-dea8-43e7-aa8f-e2a9da218238,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:22.419904 containerd[2003]: time="2026-01-21T23:37:22.418115259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 21 23:37:22.465063 containerd[2003]: time="2026-01-21T23:37:22.464479409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5468c6d76d-ffj6z,Uid:920c851c-0448-41e7-8aac-ea1379198aa5,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:37:22.834426 containerd[2003]: time="2026-01-21T23:37:22.834362736Z" level=error msg="Failed to destroy network for sandbox \"d9cb3a495e190399ba85234194d0cd4450b3966a4d2200b55b933c0dc218ba6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.845617 systemd[1]: run-netns-cni\x2df179ae73\x2d10e3\x2da960\x2d4064\x2d2b313a2c8558.mount: Deactivated successfully. 
Jan 21 23:37:22.852490 containerd[2003]: time="2026-01-21T23:37:22.850947152Z" level=error msg="Failed to destroy network for sandbox \"b781c7658e016d4031c1ab3c6cdf7a8eea4d4413128657fd9a74986e564bea2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.853588 containerd[2003]: time="2026-01-21T23:37:22.853179307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jq2tn,Uid:c9be5f6a-b238-4709-b078-d405c449b532,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cb3a495e190399ba85234194d0cd4450b3966a4d2200b55b933c0dc218ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.856015 kubelet[3515]: E0121 23:37:22.854810 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cb3a495e190399ba85234194d0cd4450b3966a4d2200b55b933c0dc218ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.856015 kubelet[3515]: E0121 23:37:22.854907 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cb3a495e190399ba85234194d0cd4450b3966a4d2200b55b933c0dc218ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jq2tn" Jan 21 23:37:22.856015 kubelet[3515]: E0121 23:37:22.854941 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cb3a495e190399ba85234194d0cd4450b3966a4d2200b55b933c0dc218ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jq2tn" Jan 21 23:37:22.857492 kubelet[3515]: E0121 23:37:22.857418 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-jq2tn_calico-system(c9be5f6a-b238-4709-b078-d405c449b532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-jq2tn_calico-system(c9be5f6a-b238-4709-b078-d405c449b532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9cb3a495e190399ba85234194d0cd4450b3966a4d2200b55b933c0dc218ba6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:37:22.863477 systemd[1]: run-netns-cni\x2d19c1dc4f\x2def22\x2de932\x2d0460\x2de1a93f8314a2.mount: Deactivated successfully. 
Jan 21 23:37:22.872399 containerd[2003]: time="2026-01-21T23:37:22.871748814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c7587b9f-qxd5v,Uid:ca6e1be7-3778-4e58-b701-59e16c774819,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b781c7658e016d4031c1ab3c6cdf7a8eea4d4413128657fd9a74986e564bea2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.875062 kubelet[3515]: E0121 23:37:22.872124 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b781c7658e016d4031c1ab3c6cdf7a8eea4d4413128657fd9a74986e564bea2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.875062 kubelet[3515]: E0121 23:37:22.872205 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b781c7658e016d4031c1ab3c6cdf7a8eea4d4413128657fd9a74986e564bea2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" Jan 21 23:37:22.875062 kubelet[3515]: E0121 23:37:22.872241 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b781c7658e016d4031c1ab3c6cdf7a8eea4d4413128657fd9a74986e564bea2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" Jan 21 23:37:22.875583 kubelet[3515]: E0121 23:37:22.872332 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8c7587b9f-qxd5v_calico-system(ca6e1be7-3778-4e58-b701-59e16c774819)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8c7587b9f-qxd5v_calico-system(ca6e1be7-3778-4e58-b701-59e16c774819)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b781c7658e016d4031c1ab3c6cdf7a8eea4d4413128657fd9a74986e564bea2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:37:22.891676 containerd[2003]: time="2026-01-21T23:37:22.891459442Z" level=error msg="Failed to destroy network for sandbox \"1a7d895bea217b62d1861fcf6ac666d562ccb002e79166f3402230e5852174c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.895779 containerd[2003]: time="2026-01-21T23:37:22.895646871Z" level=error msg="Failed to destroy network for sandbox \"aea63fe11c2030def32adf1de3ccdd85c28726452bb82b39fc6807561138fd1d\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.897730 containerd[2003]: time="2026-01-21T23:37:22.897664010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spv6f,Uid:9c98ebba-3094-4d44-b58e-8378134e1be8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a7d895bea217b62d1861fcf6ac666d562ccb002e79166f3402230e5852174c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.899937 kubelet[3515]: E0121 23:37:22.899757 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a7d895bea217b62d1861fcf6ac666d562ccb002e79166f3402230e5852174c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.900474 kubelet[3515]: E0121 23:37:22.900259 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a7d895bea217b62d1861fcf6ac666d562ccb002e79166f3402230e5852174c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spv6f" Jan 21 23:37:22.900474 kubelet[3515]: E0121 23:37:22.900416 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a7d895bea217b62d1861fcf6ac666d562ccb002e79166f3402230e5852174c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spv6f" Jan 21 23:37:22.900522 systemd[1]: run-netns-cni\x2ddec20e40\x2d7671\x2d00f6\x2d5d5f\x2d8ecd300fdcef.mount: Deactivated successfully. 
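
Every RunPodSandbox failure in this stretch carries the same underlying error from the Calico CNI plugin: /var/lib/calico/nodename does not exist, because the calico/node container (whose image pull, ghcr.io/flatcar/calico/node:v3.30.4, starts just above) has not run yet, and it is the component that writes that file. The condition amounts to a plain stat on the host, as in this sketch; the sandbox retries should start succeeding once the file appears:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The Calico CNI plugin reads the node name that calico/node records
        // on the host; until that container runs, the check fails.
        const nodenameFile = "/var/lib/calico/nodename"

        if _, err := os.Stat(nodenameFile); err != nil {
            // Prints "stat /var/lib/calico/nodename: no such file or directory",
            // the same condition wrapped in the sandbox errors above.
            fmt.Println(err)
            return
        }
        b, err := os.ReadFile(nodenameFile)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("node name:", string(b))
    }
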
Jan 21 23:37:22.904152 kubelet[3515]: E0121 23:37:22.900952 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a7d895bea217b62d1861fcf6ac666d562ccb002e79166f3402230e5852174c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:37:22.910473 containerd[2003]: time="2026-01-21T23:37:22.910383383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd44b66fc-z25sm,Uid:4da6df64-dea8-43e7-aa8f-e2a9da218238,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea63fe11c2030def32adf1de3ccdd85c28726452bb82b39fc6807561138fd1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.911139 kubelet[3515]: E0121 23:37:22.911091 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea63fe11c2030def32adf1de3ccdd85c28726452bb82b39fc6807561138fd1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.913078 kubelet[3515]: E0121 23:37:22.912217 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea63fe11c2030def32adf1de3ccdd85c28726452bb82b39fc6807561138fd1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bd44b66fc-z25sm" Jan 21 23:37:22.913078 kubelet[3515]: E0121 23:37:22.912304 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea63fe11c2030def32adf1de3ccdd85c28726452bb82b39fc6807561138fd1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bd44b66fc-z25sm" Jan 21 23:37:22.913078 kubelet[3515]: E0121 23:37:22.912403 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bd44b66fc-z25sm_calico-system(4da6df64-dea8-43e7-aa8f-e2a9da218238)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bd44b66fc-z25sm_calico-system(4da6df64-dea8-43e7-aa8f-e2a9da218238)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aea63fe11c2030def32adf1de3ccdd85c28726452bb82b39fc6807561138fd1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-5bd44b66fc-z25sm" podUID="4da6df64-dea8-43e7-aa8f-e2a9da218238" Jan 21 23:37:22.916730 systemd[1]: run-netns-cni\x2db229ef25\x2dde84\x2d62b4\x2dbe86\x2db7cbe313bee5.mount: Deactivated successfully. Jan 21 23:37:22.921458 containerd[2003]: time="2026-01-21T23:37:22.920781917Z" level=error msg="Failed to destroy network for sandbox \"2911967fd37b8f11c09132a8a7bed747b8b888d8c3d977a0dac117252391ed50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.926853 containerd[2003]: time="2026-01-21T23:37:22.926659937Z" level=error msg="Failed to destroy network for sandbox \"fc3a79bbd3a56c3a281978c071d18da0783349ee27901d1c2843c7b1e7db2f46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.932912 containerd[2003]: time="2026-01-21T23:37:22.932812067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5468c6d76d-ffj6z,Uid:920c851c-0448-41e7-8aac-ea1379198aa5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a79bbd3a56c3a281978c071d18da0783349ee27901d1c2843c7b1e7db2f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.934596 kubelet[3515]: E0121 23:37:22.933311 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a79bbd3a56c3a281978c071d18da0783349ee27901d1c2843c7b1e7db2f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.934596 kubelet[3515]: E0121 23:37:22.933381 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a79bbd3a56c3a281978c071d18da0783349ee27901d1c2843c7b1e7db2f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" Jan 21 23:37:22.934596 kubelet[3515]: E0121 23:37:22.933415 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a79bbd3a56c3a281978c071d18da0783349ee27901d1c2843c7b1e7db2f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" Jan 21 23:37:22.936165 kubelet[3515]: E0121 23:37:22.933476 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5468c6d76d-ffj6z_calico-apiserver(920c851c-0448-41e7-8aac-ea1379198aa5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5468c6d76d-ffj6z_calico-apiserver(920c851c-0448-41e7-8aac-ea1379198aa5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"fc3a79bbd3a56c3a281978c071d18da0783349ee27901d1c2843c7b1e7db2f46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:37:22.937146 containerd[2003]: time="2026-01-21T23:37:22.936937439Z" level=error msg="Failed to destroy network for sandbox \"03cce5c67791a3c4c9d317125a2c06f956c3e0c09b864bc385fbf372525f6fd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.939159 containerd[2003]: time="2026-01-21T23:37:22.939100929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84667c98fc-g2rqz,Uid:de79900c-2c4b-48e5-9995-14ed014509c5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2911967fd37b8f11c09132a8a7bed747b8b888d8c3d977a0dac117252391ed50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.939764 kubelet[3515]: E0121 23:37:22.939664 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2911967fd37b8f11c09132a8a7bed747b8b888d8c3d977a0dac117252391ed50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.939942 kubelet[3515]: E0121 23:37:22.939816 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2911967fd37b8f11c09132a8a7bed747b8b888d8c3d977a0dac117252391ed50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" Jan 21 23:37:22.939942 kubelet[3515]: E0121 23:37:22.939881 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2911967fd37b8f11c09132a8a7bed747b8b888d8c3d977a0dac117252391ed50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" Jan 21 23:37:22.940793 kubelet[3515]: E0121 23:37:22.940113 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84667c98fc-g2rqz_calico-apiserver(de79900c-2c4b-48e5-9995-14ed014509c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84667c98fc-g2rqz_calico-apiserver(de79900c-2c4b-48e5-9995-14ed014509c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2911967fd37b8f11c09132a8a7bed747b8b888d8c3d977a0dac117252391ed50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" podUID="de79900c-2c4b-48e5-9995-14ed014509c5" Jan 21 23:37:22.942853 containerd[2003]: time="2026-01-21T23:37:22.942599939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84667c98fc-fz867,Uid:20617c9b-94b4-4cb3-a6b6-dc13407eb549,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03cce5c67791a3c4c9d317125a2c06f956c3e0c09b864bc385fbf372525f6fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.944273 kubelet[3515]: E0121 23:37:22.944197 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03cce5c67791a3c4c9d317125a2c06f956c3e0c09b864bc385fbf372525f6fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:22.944446 kubelet[3515]: E0121 23:37:22.944291 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03cce5c67791a3c4c9d317125a2c06f956c3e0c09b864bc385fbf372525f6fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" Jan 21 23:37:22.944446 kubelet[3515]: E0121 23:37:22.944328 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03cce5c67791a3c4c9d317125a2c06f956c3e0c09b864bc385fbf372525f6fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" Jan 21 23:37:22.944446 kubelet[3515]: E0121 23:37:22.944402 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84667c98fc-fz867_calico-apiserver(20617c9b-94b4-4cb3-a6b6-dc13407eb549)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84667c98fc-fz867_calico-apiserver(20617c9b-94b4-4cb3-a6b6-dc13407eb549)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03cce5c67791a3c4c9d317125a2c06f956c3e0c09b864bc385fbf372525f6fd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:37:23.057528 containerd[2003]: time="2026-01-21T23:37:23.057275065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzmxg,Uid:62aba3b1-39b5-4efd-90ba-082af4dc8ffb,Namespace:kube-system,Attempt:0,}" Jan 21 23:37:23.122527 containerd[2003]: time="2026-01-21T23:37:23.122391738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glnmd,Uid:eed98923-e571-4f3e-9b7a-fa237350831a,Namespace:kube-system,Attempt:0,}" Jan 21 23:37:23.159368 containerd[2003]: time="2026-01-21T23:37:23.159203675Z" level=error 
msg="Failed to destroy network for sandbox \"988ade7b17b798a71c85f4e851fa91f297bb9708a393034d011485c493f9f6a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:23.162599 containerd[2003]: time="2026-01-21T23:37:23.162504292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzmxg,Uid:62aba3b1-39b5-4efd-90ba-082af4dc8ffb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"988ade7b17b798a71c85f4e851fa91f297bb9708a393034d011485c493f9f6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:23.163180 kubelet[3515]: E0121 23:37:23.162839 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"988ade7b17b798a71c85f4e851fa91f297bb9708a393034d011485c493f9f6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:23.163180 kubelet[3515]: E0121 23:37:23.162917 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"988ade7b17b798a71c85f4e851fa91f297bb9708a393034d011485c493f9f6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rzmxg" Jan 21 23:37:23.163180 kubelet[3515]: E0121 23:37:23.162949 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"988ade7b17b798a71c85f4e851fa91f297bb9708a393034d011485c493f9f6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rzmxg" Jan 21 23:37:23.163609 kubelet[3515]: E0121 23:37:23.163465 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rzmxg_kube-system(62aba3b1-39b5-4efd-90ba-082af4dc8ffb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rzmxg_kube-system(62aba3b1-39b5-4efd-90ba-082af4dc8ffb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"988ade7b17b798a71c85f4e851fa91f297bb9708a393034d011485c493f9f6a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rzmxg" podUID="62aba3b1-39b5-4efd-90ba-082af4dc8ffb" Jan 21 23:37:23.230140 containerd[2003]: time="2026-01-21T23:37:23.230038896Z" level=error msg="Failed to destroy network for sandbox \"ade0e3481af6d4ff397a7dd317e4b10ce4bc09dea99107d56cc4e22e174d9584\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:23.233043 containerd[2003]: 
time="2026-01-21T23:37:23.232388449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glnmd,Uid:eed98923-e571-4f3e-9b7a-fa237350831a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ade0e3481af6d4ff397a7dd317e4b10ce4bc09dea99107d56cc4e22e174d9584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:23.233224 kubelet[3515]: E0121 23:37:23.232733 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ade0e3481af6d4ff397a7dd317e4b10ce4bc09dea99107d56cc4e22e174d9584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 23:37:23.233224 kubelet[3515]: E0121 23:37:23.232808 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ade0e3481af6d4ff397a7dd317e4b10ce4bc09dea99107d56cc4e22e174d9584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-glnmd" Jan 21 23:37:23.233224 kubelet[3515]: E0121 23:37:23.232843 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ade0e3481af6d4ff397a7dd317e4b10ce4bc09dea99107d56cc4e22e174d9584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-glnmd" Jan 21 23:37:23.235571 kubelet[3515]: E0121 23:37:23.232911 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-glnmd_kube-system(eed98923-e571-4f3e-9b7a-fa237350831a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-glnmd_kube-system(eed98923-e571-4f3e-9b7a-fa237350831a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ade0e3481af6d4ff397a7dd317e4b10ce4bc09dea99107d56cc4e22e174d9584\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-glnmd" podUID="eed98923-e571-4f3e-9b7a-fa237350831a" Jan 21 23:37:23.740735 systemd[1]: run-netns-cni\x2df531a7d8\x2d8008\x2dec39\x2d822d\x2d9fdf0525a27e.mount: Deactivated successfully. Jan 21 23:37:23.745860 systemd[1]: run-netns-cni\x2d7c569ebe\x2dd5db\x2d35b1\x2d81ef\x2dd76526de9c15.mount: Deactivated successfully. Jan 21 23:37:23.746116 systemd[1]: run-netns-cni\x2d2a4219c8\x2db835\x2dd128\x2d83f0\x2d75ee34c45c81.mount: Deactivated successfully. Jan 21 23:37:29.094303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92009114.mount: Deactivated successfully. 
Jan 21 23:37:29.144997 containerd[2003]: time="2026-01-21T23:37:29.144924368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:29.146400 containerd[2003]: time="2026-01-21T23:37:29.146297717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 21 23:37:29.148034 containerd[2003]: time="2026-01-21T23:37:29.147705021Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:29.152338 containerd[2003]: time="2026-01-21T23:37:29.152251586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 23:37:29.153842 containerd[2003]: time="2026-01-21T23:37:29.153776060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.735288615s" Jan 21 23:37:29.154129 containerd[2003]: time="2026-01-21T23:37:29.154084006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 21 23:37:29.182640 containerd[2003]: time="2026-01-21T23:37:29.182578403Z" level=info msg="CreateContainer within sandbox \"e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 21 23:37:29.205074 containerd[2003]: time="2026-01-21T23:37:29.203157685Z" level=info msg="Container d14393d9cd27670d6d75b818b749a71cdef840e87a49fc29ac6aee415f2e8342: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:37:29.225334 containerd[2003]: time="2026-01-21T23:37:29.225246747Z" level=info msg="CreateContainer within sandbox \"e85b974322570a0f75f1ed5127d457997918e814fad19f38db4758482d922b11\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d14393d9cd27670d6d75b818b749a71cdef840e87a49fc29ac6aee415f2e8342\"" Jan 21 23:37:29.226916 containerd[2003]: time="2026-01-21T23:37:29.226837247Z" level=info msg="StartContainer for \"d14393d9cd27670d6d75b818b749a71cdef840e87a49fc29ac6aee415f2e8342\"" Jan 21 23:37:29.232957 containerd[2003]: time="2026-01-21T23:37:29.232810175Z" level=info msg="connecting to shim d14393d9cd27670d6d75b818b749a71cdef840e87a49fc29ac6aee415f2e8342" address="unix:///run/containerd/s/be305f90218e2dfa69180a37e29f01fd4127a72dede4f032b35994f0ce313da0" protocol=ttrpc version=3 Jan 21 23:37:29.284597 systemd[1]: Started cri-containerd-d14393d9cd27670d6d75b818b749a71cdef840e87a49fc29ac6aee415f2e8342.scope - libcontainer container d14393d9cd27670d6d75b818b749a71cdef840e87a49fc29ac6aee415f2e8342. 
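For scale, the PullImage line above reports the calico/node image (150934424 bytes by repo digest) arriving in 6.735288615s, which works out to roughly 22 MB/s. A trivial check of that arithmetic, using only the two numbers from the log:

```go
// Back-of-the-envelope pull throughput from the PullImage log line above.
package main

import "fmt"

func main() {
	const bytes = 150934424.0   // image size reported in the "Pulled image" entry
	const seconds = 6.735288615 // pull duration reported in the same entry
	fmt.Printf("~%.1f MB/s (~%.1f MiB/s)\n", bytes/seconds/1e6, bytes/seconds/(1<<20))
}
```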
Jan 21 23:37:29.378433 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 21 23:37:29.378575 kernel: audit: type=1334 audit(1769038649.375:593): prog-id=179 op=LOAD Jan 21 23:37:29.375000 audit: BPF prog-id=179 op=LOAD Jan 21 23:37:29.375000 audit[4592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=4069 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:29.386490 kernel: audit: type=1300 audit(1769038649.375:593): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=4069 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:29.386655 kernel: audit: type=1327 audit(1769038649.375:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343339336439636432373637306436643735623831386237343961 Jan 21 23:37:29.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343339336439636432373637306436643735623831386237343961 Jan 21 23:37:29.392614 kernel: audit: type=1334 audit(1769038649.375:594): prog-id=180 op=LOAD Jan 21 23:37:29.375000 audit: BPF prog-id=180 op=LOAD Jan 21 23:37:29.400072 kernel: audit: type=1300 audit(1769038649.375:594): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=4069 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:29.375000 audit[4592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=4069 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:29.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343339336439636432373637306436643735623831386237343961 Jan 21 23:37:29.407346 kernel: audit: type=1327 audit(1769038649.375:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343339336439636432373637306436643735623831386237343961 Jan 21 23:37:29.410128 kernel: audit: type=1334 audit(1769038649.378:595): prog-id=180 op=UNLOAD Jan 21 23:37:29.378000 audit: BPF prog-id=180 op=UNLOAD Jan 21 23:37:29.378000 audit[4592]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:29.417568 kernel: audit: type=1300 
audit(1769038649.378:595): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:29.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343339336439636432373637306436643735623831386237343961 Jan 21 23:37:29.423900 kernel: audit: type=1327 audit(1769038649.378:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343339336439636432373637306436643735623831386237343961 Jan 21 23:37:29.378000 audit: BPF prog-id=179 op=UNLOAD Jan 21 23:37:29.378000 audit[4592]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:29.429091 kernel: audit: type=1334 audit(1769038649.378:596): prog-id=179 op=UNLOAD Jan 21 23:37:29.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343339336439636432373637306436643735623831386237343961 Jan 21 23:37:29.378000 audit: BPF prog-id=181 op=LOAD Jan 21 23:37:29.378000 audit[4592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=4069 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:29.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343339336439636432373637306436643735623831386237343961 Jan 21 23:37:29.490098 containerd[2003]: time="2026-01-21T23:37:29.488497356Z" level=info msg="StartContainer for \"d14393d9cd27670d6d75b818b749a71cdef840e87a49fc29ac6aee415f2e8342\" returns successfully" Jan 21 23:37:29.874987 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 21 23:37:29.875128 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
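The WireGuard module load right after calico-node starts is unsurprising: calico-node probes for (and, when node-to-node encryption is enabled, uses) in-kernel WireGuard. A quick userspace confirmation, sketched against the standard /proc/modules interface rather than anything Calico- or Flatcar-specific:

```go
// Sketch: confirm from userspace that the wireguard module the kernel just
// reported is loaded, by scanning /proc/modules (standard Linux interface).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line starts with the module name followed by a space.
		if strings.HasPrefix(sc.Text(), "wireguard ") {
			fmt.Println("wireguard module loaded:", sc.Text())
			return
		}
	}
	fmt.Println("wireguard module not loaded")
}
```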
Jan 21 23:37:30.152158 kubelet[3515]: I0121 23:37:30.151946 3515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4da6df64-dea8-43e7-aa8f-e2a9da218238-whisker-backend-key-pair\") pod \"4da6df64-dea8-43e7-aa8f-e2a9da218238\" (UID: \"4da6df64-dea8-43e7-aa8f-e2a9da218238\") " Jan 21 23:37:30.152158 kubelet[3515]: I0121 23:37:30.152091 3515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4da6df64-dea8-43e7-aa8f-e2a9da218238-whisker-ca-bundle\") pod \"4da6df64-dea8-43e7-aa8f-e2a9da218238\" (UID: \"4da6df64-dea8-43e7-aa8f-e2a9da218238\") " Jan 21 23:37:30.152158 kubelet[3515]: I0121 23:37:30.152136 3515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7lw8\" (UniqueName: \"kubernetes.io/projected/4da6df64-dea8-43e7-aa8f-e2a9da218238-kube-api-access-b7lw8\") pod \"4da6df64-dea8-43e7-aa8f-e2a9da218238\" (UID: \"4da6df64-dea8-43e7-aa8f-e2a9da218238\") " Jan 21 23:37:30.156855 kubelet[3515]: I0121 23:37:30.155501 3515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da6df64-dea8-43e7-aa8f-e2a9da218238-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4da6df64-dea8-43e7-aa8f-e2a9da218238" (UID: "4da6df64-dea8-43e7-aa8f-e2a9da218238"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 23:37:30.166139 kubelet[3515]: I0121 23:37:30.166077 3515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da6df64-dea8-43e7-aa8f-e2a9da218238-kube-api-access-b7lw8" (OuterVolumeSpecName: "kube-api-access-b7lw8") pod "4da6df64-dea8-43e7-aa8f-e2a9da218238" (UID: "4da6df64-dea8-43e7-aa8f-e2a9da218238"). InnerVolumeSpecName "kube-api-access-b7lw8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 23:37:30.168050 systemd[1]: var-lib-kubelet-pods-4da6df64\x2ddea8\x2d43e7\x2daa8f\x2de2a9da218238-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db7lw8.mount: Deactivated successfully. Jan 21 23:37:30.176288 kubelet[3515]: I0121 23:37:30.176211 3515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da6df64-dea8-43e7-aa8f-e2a9da218238-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4da6df64-dea8-43e7-aa8f-e2a9da218238" (UID: "4da6df64-dea8-43e7-aa8f-e2a9da218238"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 23:37:30.181021 systemd[1]: var-lib-kubelet-pods-4da6df64\x2ddea8\x2d43e7\x2daa8f\x2de2a9da218238-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
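The \x2d and \x7e sequences in these mount unit names are systemd's escaping of '-' and '~' in the underlying path (a '/' in the path becomes '-' in the unit name). An illustrative helper, not part of systemd, that reverses the escaping to recover the volume path being torn down:

```go
// Illustrative (non-systemd) reversal of systemd's unit-name escaping, applied
// to the mount units logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var out strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			// \xNN escapes a single byte of the original path.
			if b, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				out.WriteByte(byte(b))
				i += 3
				continue
			}
			out.WriteByte(name[i])
		case name[i] == '-':
			out.WriteByte('/') // systemd maps '/' in the path to '-' in the unit name
		default:
			out.WriteByte(name[i])
		}
	}
	return "/" + out.String()
}

func main() {
	fmt.Println(unescapeUnit(`var-lib-kubelet-pods-4da6df64\x2ddea8\x2d43e7\x2daa8f\x2de2a9da218238-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db7lw8.mount`))
	// -> /var/lib/kubelet/pods/4da6df64-dea8-43e7-aa8f-e2a9da218238/volumes/kubernetes.io~projected/kube-api-access-b7lw8
}
```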
Jan 21 23:37:30.252900 kubelet[3515]: I0121 23:37:30.252823 3515 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4da6df64-dea8-43e7-aa8f-e2a9da218238-whisker-backend-key-pair\") on node \"ip-172-31-29-34\" DevicePath \"\"" Jan 21 23:37:30.252900 kubelet[3515]: I0121 23:37:30.252877 3515 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4da6df64-dea8-43e7-aa8f-e2a9da218238-whisker-ca-bundle\") on node \"ip-172-31-29-34\" DevicePath \"\"" Jan 21 23:37:30.252900 kubelet[3515]: I0121 23:37:30.252903 3515 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b7lw8\" (UniqueName: \"kubernetes.io/projected/4da6df64-dea8-43e7-aa8f-e2a9da218238-kube-api-access-b7lw8\") on node \"ip-172-31-29-34\" DevicePath \"\"" Jan 21 23:37:30.516950 systemd[1]: Removed slice kubepods-besteffort-pod4da6df64_dea8_43e7_aa8f_e2a9da218238.slice - libcontainer container kubepods-besteffort-pod4da6df64_dea8_43e7_aa8f_e2a9da218238.slice. Jan 21 23:37:30.544996 kubelet[3515]: I0121 23:37:30.544861 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5xxg7" podStartSLOduration=2.642199506 podStartE2EDuration="18.544830984s" podCreationTimestamp="2026-01-21 23:37:12 +0000 UTC" firstStartedPulling="2026-01-21 23:37:13.253070374 +0000 UTC m=+39.555538403" lastFinishedPulling="2026-01-21 23:37:29.155701864 +0000 UTC m=+55.458169881" observedRunningTime="2026-01-21 23:37:30.54465881 +0000 UTC m=+56.847126827" watchObservedRunningTime="2026-01-21 23:37:30.544830984 +0000 UTC m=+56.847299001" Jan 21 23:37:30.654616 kubelet[3515]: I0121 23:37:30.654292 3515 status_manager.go:890] "Failed to get status for pod" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" pod="calico-system/whisker-64b9b9d79c-64kd2" err="pods \"whisker-64b9b9d79c-64kd2\" is forbidden: User \"system:node:ip-172-31-29-34\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-34' and this object" Jan 21 23:37:30.656138 kubelet[3515]: W0121 23:37:30.654673 3515 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-29-34" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-34' and this object Jan 21 23:37:30.656138 kubelet[3515]: E0121 23:37:30.654863 3515 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-29-34\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-34' and this object" logger="UnhandledError" Jan 21 23:37:30.656138 kubelet[3515]: W0121 23:37:30.654813 3515 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ip-172-31-29-34" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-34' and this object Jan 21 23:37:30.656138 kubelet[3515]: E0121 23:37:30.654909 3515 reflector.go:166] "Unhandled Error" 
err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ip-172-31-29-34\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-34' and this object" logger="UnhandledError" Jan 21 23:37:30.667386 systemd[1]: Created slice kubepods-besteffort-podb82636a0_c786_4a73_90fe_e05d2e69a656.slice - libcontainer container kubepods-besteffort-podb82636a0_c786_4a73_90fe_e05d2e69a656.slice. Jan 21 23:37:30.757035 kubelet[3515]: I0121 23:37:30.756911 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbxdj\" (UniqueName: \"kubernetes.io/projected/b82636a0-c786-4a73-90fe-e05d2e69a656-kube-api-access-bbxdj\") pod \"whisker-64b9b9d79c-64kd2\" (UID: \"b82636a0-c786-4a73-90fe-e05d2e69a656\") " pod="calico-system/whisker-64b9b9d79c-64kd2" Jan 21 23:37:30.757404 kubelet[3515]: I0121 23:37:30.757139 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b82636a0-c786-4a73-90fe-e05d2e69a656-whisker-backend-key-pair\") pod \"whisker-64b9b9d79c-64kd2\" (UID: \"b82636a0-c786-4a73-90fe-e05d2e69a656\") " pod="calico-system/whisker-64b9b9d79c-64kd2" Jan 21 23:37:30.757404 kubelet[3515]: I0121 23:37:30.757226 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b82636a0-c786-4a73-90fe-e05d2e69a656-whisker-ca-bundle\") pod \"whisker-64b9b9d79c-64kd2\" (UID: \"b82636a0-c786-4a73-90fe-e05d2e69a656\") " pod="calico-system/whisker-64b9b9d79c-64kd2" Jan 21 23:37:31.876490 containerd[2003]: time="2026-01-21T23:37:31.875621528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64b9b9d79c-64kd2,Uid:b82636a0-c786-4a73-90fe-e05d2e69a656,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:32.107954 kubelet[3515]: I0121 23:37:32.107786 3515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da6df64-dea8-43e7-aa8f-e2a9da218238" path="/var/lib/kubelet/pods/4da6df64-dea8-43e7-aa8f-e2a9da218238/volumes" Jan 21 23:37:32.476000 audit: BPF prog-id=182 op=LOAD Jan 21 23:37:32.476000 audit[4815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffee1af788 a2=98 a3=ffffee1af778 items=0 ppid=4703 pid=4815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.476000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:37:32.476000 audit: BPF prog-id=182 op=UNLOAD Jan 21 23:37:32.476000 audit[4815]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffee1af758 a3=0 items=0 ppid=4703 pid=4815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.476000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:37:32.477000 audit: BPF prog-id=183 op=LOAD Jan 21 23:37:32.477000 audit[4815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffee1af638 a2=74 a3=95 items=0 ppid=4703 pid=4815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.477000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:37:32.477000 audit: BPF prog-id=183 op=UNLOAD Jan 21 23:37:32.477000 audit[4815]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4703 pid=4815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.477000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:37:32.477000 audit: BPF prog-id=184 op=LOAD Jan 21 23:37:32.477000 audit[4815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffee1af668 a2=40 a3=ffffee1af698 items=0 ppid=4703 pid=4815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.477000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:37:32.477000 audit: BPF prog-id=184 op=UNLOAD Jan 21 23:37:32.477000 audit[4815]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffee1af698 items=0 ppid=4703 pid=4815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.477000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 23:37:32.480000 audit: BPF prog-id=185 op=LOAD Jan 21 23:37:32.480000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc67db9e8 a2=98 a3=ffffc67db9d8 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.480000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.481000 audit: BPF 
prog-id=185 op=UNLOAD Jan 21 23:37:32.481000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc67db9b8 a3=0 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.481000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.481000 audit: BPF prog-id=186 op=LOAD Jan 21 23:37:32.481000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc67db678 a2=74 a3=95 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.481000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.481000 audit: BPF prog-id=186 op=UNLOAD Jan 21 23:37:32.481000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.481000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.481000 audit: BPF prog-id=187 op=LOAD Jan 21 23:37:32.481000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc67db6d8 a2=94 a3=2 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.481000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.481000 audit: BPF prog-id=187 op=UNLOAD Jan 21 23:37:32.481000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.481000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.891000 audit: BPF prog-id=188 op=LOAD Jan 21 23:37:32.891000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc67db698 a2=40 a3=ffffc67db6c8 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.891000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.891000 audit: BPF prog-id=188 op=UNLOAD Jan 21 23:37:32.891000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc67db6c8 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.891000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.919000 audit: BPF prog-id=189 op=LOAD Jan 21 23:37:32.919000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc67db6a8 a2=94 a3=4 items=0 ppid=4703 pid=4816 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.919000 audit: BPF prog-id=189 op=UNLOAD Jan 21 23:37:32.919000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.919000 audit: BPF prog-id=190 op=LOAD Jan 21 23:37:32.919000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc67db4e8 a2=94 a3=5 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.919000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.920000 audit: BPF prog-id=190 op=UNLOAD Jan 21 23:37:32.920000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.920000 audit: BPF prog-id=191 op=LOAD Jan 21 23:37:32.920000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc67db718 a2=94 a3=6 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.920000 audit: BPF prog-id=191 op=UNLOAD Jan 21 23:37:32.920000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.920000 audit: BPF prog-id=192 op=LOAD Jan 21 23:37:32.920000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc67daee8 a2=94 a3=83 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.920000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.921000 audit: BPF prog-id=193 op=LOAD Jan 21 23:37:32.921000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc67daca8 a2=94 a3=2 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
23:37:32.921000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.921000 audit: BPF prog-id=193 op=UNLOAD Jan 21 23:37:32.921000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.921000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.922000 audit: BPF prog-id=192 op=UNLOAD Jan 21 23:37:32.922000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=25c6c620 a3=25c5fb00 items=0 ppid=4703 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.922000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 23:37:32.954000 audit: BPF prog-id=194 op=LOAD Jan 21 23:37:32.954000 audit[4846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd4c73a78 a2=98 a3=ffffd4c73a68 items=0 ppid=4703 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.954000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:37:32.955000 audit: BPF prog-id=194 op=UNLOAD Jan 21 23:37:32.955000 audit[4846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd4c73a48 a3=0 items=0 ppid=4703 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.955000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:37:32.955000 audit: BPF prog-id=195 op=LOAD Jan 21 23:37:32.955000 audit[4846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd4c73928 a2=74 a3=95 items=0 ppid=4703 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.955000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:37:32.955000 audit: BPF prog-id=195 op=UNLOAD Jan 21 23:37:32.955000 audit[4846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4703 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.955000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:37:32.955000 audit: BPF prog-id=196 op=LOAD Jan 21 23:37:32.955000 audit[4846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd4c73958 a2=40 a3=ffffd4c73988 items=0 ppid=4703 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.955000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:37:32.955000 audit: BPF prog-id=196 op=UNLOAD Jan 21 23:37:32.955000 audit[4846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd4c73988 items=0 ppid=4703 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:32.955000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 23:37:33.167150 (udev-worker)[4858]: Network interface NamePolicy= disabled on kernel command line. Jan 21 23:37:33.181301 systemd-networkd[1597]: vxlan.calico: Link UP Jan 21 23:37:33.181322 systemd-networkd[1597]: vxlan.calico: Gained carrier Jan 21 23:37:33.308084 (udev-worker)[4859]: Network interface NamePolicy= disabled on kernel command line. 
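The proctitle= values in these audit records are the invoking command line, hex-encoded with NUL bytes between arguments. Decoded, the records above are bpftool invocations (for example, 627066746F6F6C006D6170006C697374002D2D6A736F6E is "bpftool map list --json", and the longer ones create pinned maps such as /sys/fs/bpf/calico/calico_failsafe_ports_v1), apparently issued during calico-node's BPF dataplane setup since they share ppid 4703. A throwaway decoder, not part of auditd:

```go
// Decode an audit proctitle= field: hex-encoded argv with NUL separators.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// proctitle value copied from one of the bpftool audit records above
	const proctitle = "627066746F6F6C006D6170006C697374002D2D6A736F6E"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	// Output: bpftool map list --json
}
```

Note that the kernel truncates long proctitle fields, which is why some of the decoded bpftool and runc command lines end mid-argument.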
Jan 21 23:37:33.312000 audit: BPF prog-id=197 op=LOAD Jan 21 23:37:33.312000 audit[4884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffddaea98 a2=98 a3=fffffddaea88 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.312000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.312000 audit: BPF prog-id=197 op=UNLOAD Jan 21 23:37:33.312000 audit[4884]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffddaea68 a3=0 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.312000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.312000 audit: BPF prog-id=198 op=LOAD Jan 21 23:37:33.312000 audit[4884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffddae778 a2=74 a3=95 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.312000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.312000 audit: BPF prog-id=198 op=UNLOAD Jan 21 23:37:33.312000 audit[4884]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.312000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.312000 audit: BPF prog-id=199 op=LOAD Jan 21 23:37:33.312000 audit[4884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffddae7d8 a2=94 a3=2 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.312000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.315000 audit: BPF prog-id=199 op=UNLOAD Jan 21 23:37:33.315000 audit[4884]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.315000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.315000 audit: BPF prog-id=200 op=LOAD Jan 21 23:37:33.315000 audit[4884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffddae658 a2=40 a3=fffffddae688 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.315000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.315000 audit: BPF prog-id=200 op=UNLOAD Jan 21 23:37:33.315000 audit[4884]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffffddae688 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.315000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.315000 audit: BPF prog-id=201 op=LOAD Jan 21 23:37:33.315000 audit[4884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffddae7a8 a2=94 a3=b7 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.315000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.315000 audit: BPF prog-id=201 op=UNLOAD Jan 21 23:37:33.315000 audit[4884]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.315000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.324000 audit: BPF prog-id=202 op=LOAD Jan 21 23:37:33.324000 audit[4884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffddade58 a2=94 a3=2 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.324000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.324000 audit: BPF prog-id=202 op=UNLOAD Jan 21 23:37:33.324000 audit[4884]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.324000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.324000 audit: BPF prog-id=203 op=LOAD Jan 21 23:37:33.324000 audit[4884]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffddadfe8 a2=94 a3=30 items=0 ppid=4703 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.324000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 23:37:33.355000 audit: BPF prog-id=204 op=LOAD Jan 21 23:37:33.355000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcc07b7e8 a2=98 a3=ffffcc07b7d8 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.355000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.356000 audit: BPF prog-id=204 op=UNLOAD Jan 21 23:37:33.356000 audit[4891]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcc07b7b8 a3=0 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.356000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.357000 audit: BPF prog-id=205 op=LOAD Jan 21 23:37:33.357000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcc07b478 a2=74 a3=95 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.357000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.357000 audit: BPF prog-id=205 op=UNLOAD Jan 21 23:37:33.357000 audit[4891]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4703 pid=4891 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.357000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.358000 audit: BPF prog-id=206 op=LOAD Jan 21 23:37:33.358000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcc07b4d8 a2=94 a3=2 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.358000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.358000 audit: BPF prog-id=206 op=UNLOAD Jan 21 23:37:33.358000 audit[4891]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.358000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.419502 systemd-networkd[1597]: cali81664905efe: Link UP Jan 21 23:37:33.421379 systemd-networkd[1597]: cali81664905efe: Gained carrier Jan 21 23:37:33.565595 containerd[2003]: 2026-01-21 23:37:32.051 [INFO][4768] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 23:37:33.565595 containerd[2003]: 2026-01-21 23:37:33.064 [INFO][4768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0 whisker-64b9b9d79c- calico-system b82636a0-c786-4a73-90fe-e05d2e69a656 953 0 2026-01-21 23:37:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64b9b9d79c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-34 whisker-64b9b9d79c-64kd2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali81664905efe [] [] }} ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Namespace="calico-system" Pod="whisker-64b9b9d79c-64kd2" WorkloadEndpoint="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-" Jan 21 23:37:33.565595 containerd[2003]: 2026-01-21 23:37:33.064 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Namespace="calico-system" Pod="whisker-64b9b9d79c-64kd2" WorkloadEndpoint="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" Jan 21 23:37:33.565595 containerd[2003]: 2026-01-21 23:37:33.268 [INFO][4861] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" HandleID="k8s-pod-network.be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" 
Workload="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" Jan 21 23:37:33.569666 containerd[2003]: 2026-01-21 23:37:33.269 [INFO][4861] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" HandleID="k8s-pod-network.be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Workload="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3e20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-34", "pod":"whisker-64b9b9d79c-64kd2", "timestamp":"2026-01-21 23:37:33.268732812 +0000 UTC"}, Hostname:"ip-172-31-29-34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:37:33.569666 containerd[2003]: 2026-01-21 23:37:33.269 [INFO][4861] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:37:33.569666 containerd[2003]: 2026-01-21 23:37:33.269 [INFO][4861] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 23:37:33.569666 containerd[2003]: 2026-01-21 23:37:33.269 [INFO][4861] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-34' Jan 21 23:37:33.569666 containerd[2003]: 2026-01-21 23:37:33.286 [INFO][4861] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" host="ip-172-31-29-34" Jan 21 23:37:33.569666 containerd[2003]: 2026-01-21 23:37:33.295 [INFO][4861] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-34" Jan 21 23:37:33.569666 containerd[2003]: 2026-01-21 23:37:33.318 [INFO][4861] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:33.569666 containerd[2003]: 2026-01-21 23:37:33.324 [INFO][4861] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:33.569666 containerd[2003]: 2026-01-21 23:37:33.335 [INFO][4861] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:33.570239 containerd[2003]: 2026-01-21 23:37:33.335 [INFO][4861] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" host="ip-172-31-29-34" Jan 21 23:37:33.570239 containerd[2003]: 2026-01-21 23:37:33.344 [INFO][4861] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004 Jan 21 23:37:33.570239 containerd[2003]: 2026-01-21 23:37:33.355 [INFO][4861] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" host="ip-172-31-29-34" Jan 21 23:37:33.570239 containerd[2003]: 2026-01-21 23:37:33.369 [INFO][4861] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.129/26] block=192.168.73.128/26 handle="k8s-pod-network.be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" host="ip-172-31-29-34" Jan 21 23:37:33.570239 containerd[2003]: 2026-01-21 23:37:33.369 [INFO][4861] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.129/26] handle="k8s-pod-network.be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" host="ip-172-31-29-34" Jan 21 
23:37:33.570239 containerd[2003]: 2026-01-21 23:37:33.369 [INFO][4861] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 23:37:33.570239 containerd[2003]: 2026-01-21 23:37:33.369 [INFO][4861] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.129/26] IPv6=[] ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" HandleID="k8s-pod-network.be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Workload="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" Jan 21 23:37:33.571231 containerd[2003]: 2026-01-21 23:37:33.378 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Namespace="calico-system" Pod="whisker-64b9b9d79c-64kd2" WorkloadEndpoint="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0", GenerateName:"whisker-64b9b9d79c-", Namespace:"calico-system", SelfLink:"", UID:"b82636a0-c786-4a73-90fe-e05d2e69a656", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64b9b9d79c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"", Pod:"whisker-64b9b9d79c-64kd2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali81664905efe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:33.571231 containerd[2003]: 2026-01-21 23:37:33.378 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.129/32] ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Namespace="calico-system" Pod="whisker-64b9b9d79c-64kd2" WorkloadEndpoint="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" Jan 21 23:37:33.571410 containerd[2003]: 2026-01-21 23:37:33.378 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81664905efe ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Namespace="calico-system" Pod="whisker-64b9b9d79c-64kd2" WorkloadEndpoint="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" Jan 21 23:37:33.571410 containerd[2003]: 2026-01-21 23:37:33.452 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Namespace="calico-system" Pod="whisker-64b9b9d79c-64kd2" WorkloadEndpoint="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" Jan 21 23:37:33.571515 containerd[2003]: 2026-01-21 23:37:33.452 [INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Namespace="calico-system" Pod="whisker-64b9b9d79c-64kd2" WorkloadEndpoint="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0", GenerateName:"whisker-64b9b9d79c-", Namespace:"calico-system", SelfLink:"", UID:"b82636a0-c786-4a73-90fe-e05d2e69a656", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64b9b9d79c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004", Pod:"whisker-64b9b9d79c-64kd2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali81664905efe", MAC:"56:73:84:e2:40:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:33.572611 containerd[2003]: 2026-01-21 23:37:33.561 [INFO][4768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" Namespace="calico-system" Pod="whisker-64b9b9d79c-64kd2" WorkloadEndpoint="ip--172--31--29--34-k8s-whisker--64b9b9d79c--64kd2-eth0" Jan 21 23:37:33.621424 containerd[2003]: time="2026-01-21T23:37:33.621350512Z" level=info msg="connecting to shim be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004" address="unix:///run/containerd/s/b456160f82e8b3e3d668669d9cc11cc15d79e9159215bcfec25f0e01550b8c5d" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:33.656000 audit: BPF prog-id=207 op=LOAD Jan 21 23:37:33.656000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcc07b498 a2=40 a3=ffffcc07b4c8 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.656000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.656000 audit: BPF prog-id=207 op=UNLOAD Jan 21 23:37:33.656000 audit[4891]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffcc07b4c8 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.656000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.686417 systemd[1]: Started cri-containerd-be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004.scope - libcontainer container be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004. Jan 21 23:37:33.692000 audit: BPF prog-id=208 op=LOAD Jan 21 23:37:33.692000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcc07b4a8 a2=94 a3=4 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.692000 audit: BPF prog-id=208 op=UNLOAD Jan 21 23:37:33.692000 audit[4891]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.692000 audit: BPF prog-id=209 op=LOAD Jan 21 23:37:33.692000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcc07b2e8 a2=94 a3=5 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.693000 audit: BPF prog-id=209 op=UNLOAD Jan 21 23:37:33.693000 audit[4891]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.693000 audit: BPF prog-id=210 op=LOAD Jan 21 23:37:33.693000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcc07b518 a2=94 a3=6 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.693000 audit: BPF prog-id=210 op=UNLOAD Jan 21 23:37:33.693000 
audit[4891]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.693000 audit: BPF prog-id=211 op=LOAD Jan 21 23:37:33.693000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcc07ace8 a2=94 a3=83 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.694000 audit: BPF prog-id=212 op=LOAD Jan 21 23:37:33.694000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffcc07aaa8 a2=94 a3=2 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.694000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.694000 audit: BPF prog-id=212 op=UNLOAD Jan 21 23:37:33.694000 audit[4891]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.694000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.695000 audit: BPF prog-id=211 op=UNLOAD Jan 21 23:37:33.695000 audit[4891]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=9e6b620 a3=9e5eb00 items=0 ppid=4703 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.695000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 23:37:33.707000 audit: BPF prog-id=203 op=UNLOAD Jan 21 23:37:33.707000 audit[4703]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000820ac0 a2=0 a3=0 items=0 ppid=4680 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.707000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 21 23:37:33.727000 audit: BPF prog-id=213 op=LOAD Jan 21 23:37:33.729000 
audit: BPF prog-id=214 op=LOAD Jan 21 23:37:33.729000 audit[4918]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4907 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303638653661626234366663666165363363616331623564636537 Jan 21 23:37:33.730000 audit: BPF prog-id=214 op=UNLOAD Jan 21 23:37:33.730000 audit[4918]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4907 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303638653661626234366663666165363363616331623564636537 Jan 21 23:37:33.731000 audit: BPF prog-id=215 op=LOAD Jan 21 23:37:33.731000 audit[4918]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4907 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303638653661626234366663666165363363616331623564636537 Jan 21 23:37:33.732000 audit: BPF prog-id=216 op=LOAD Jan 21 23:37:33.732000 audit[4918]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4907 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303638653661626234366663666165363363616331623564636537 Jan 21 23:37:33.733000 audit: BPF prog-id=216 op=UNLOAD Jan 21 23:37:33.733000 audit[4918]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4907 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303638653661626234366663666165363363616331623564636537 Jan 21 23:37:33.733000 audit: BPF prog-id=215 op=UNLOAD Jan 21 23:37:33.733000 audit[4918]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=4907 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303638653661626234366663666165363363616331623564636537 Jan 21 23:37:33.734000 audit: BPF prog-id=217 op=LOAD Jan 21 23:37:33.734000 audit[4918]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4907 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303638653661626234366663666165363363616331623564636537 Jan 21 23:37:33.806608 containerd[2003]: time="2026-01-21T23:37:33.806416805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64b9b9d79c-64kd2,Uid:b82636a0-c786-4a73-90fe-e05d2e69a656,Namespace:calico-system,Attempt:0,} returns sandbox id \"be068e6abb46fcfae63cac1b5dce79642e8985cf22a6af43aabdefd3d7973004\"" Jan 21 23:37:33.811665 containerd[2003]: time="2026-01-21T23:37:33.811246417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 23:37:33.889000 audit[4966]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4966 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:33.889000 audit[4966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd8978fc0 a2=0 a3=ffffa3332fa8 items=0 ppid=4703 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.889000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:33.891000 audit[4969]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4969 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:33.891000 audit[4969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffca96b440 a2=0 a3=ffff880c3fa8 items=0 ppid=4703 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.891000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:33.896000 audit[4965]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4965 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:33.896000 audit[4965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffffa690580 a2=0 a3=ffff91e59fa8 items=0 ppid=4703 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.896000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:33.901000 audit[4967]: NETFILTER_CFG table=filter:126 family=2 entries=39 op=nft_register_chain pid=4967 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:33.901000 audit[4967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffe1b54b60 a2=0 a3=ffff8fe11fa8 items=0 ppid=4703 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.901000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:33.948000 audit[4976]: NETFILTER_CFG table=filter:127 family=2 entries=59 op=nft_register_chain pid=4976 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:33.948000 audit[4976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=35860 a0=3 a1=ffffdff39bc0 a2=0 a3=ffff7f5abfa8 items=0 ppid=4703 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:33.948000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:34.119195 containerd[2003]: time="2026-01-21T23:37:34.117968548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jq2tn,Uid:c9be5f6a-b238-4709-b078-d405c449b532,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:34.119195 containerd[2003]: time="2026-01-21T23:37:34.118852050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzmxg,Uid:62aba3b1-39b5-4efd-90ba-082af4dc8ffb,Namespace:kube-system,Attempt:0,}" Jan 21 23:37:34.123690 containerd[2003]: time="2026-01-21T23:37:34.123550064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:34.126031 containerd[2003]: time="2026-01-21T23:37:34.124780289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 23:37:34.126031 containerd[2003]: time="2026-01-21T23:37:34.124908181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:34.130086 kubelet[3515]: E0121 23:37:34.126486 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:37:34.133245 kubelet[3515]: E0121 23:37:34.130106 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:37:34.157709 kubelet[3515]: E0121 23:37:34.157373 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:24f3b654eaca42c1be474f4c2fb54f82,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bbxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b9b9d79c-64kd2_calico-system(b82636a0-c786-4a73-90fe-e05d2e69a656): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:34.163896 containerd[2003]: time="2026-01-21T23:37:34.163833401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 23:37:34.506227 systemd-networkd[1597]: cali2473b56ca62: Link UP Jan 21 23:37:34.506687 systemd-networkd[1597]: cali2473b56ca62: Gained carrier Jan 21 23:37:34.529043 containerd[2003]: time="2026-01-21T23:37:34.526948010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:34.536787 containerd[2003]: time="2026-01-21T23:37:34.536622910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 23:37:34.537293 containerd[2003]: time="2026-01-21T23:37:34.536955575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:34.538736 kubelet[3515]: E0121 23:37:34.538132 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:37:34.538736 kubelet[3515]: E0121 23:37:34.538199 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:37:34.539115 kubelet[3515]: E0121 23:37:34.538396 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b9b9d79c-64kd2_calico-system(b82636a0-c786-4a73-90fe-e05d2e69a656): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:34.540492 kubelet[3515]: E0121 23:37:34.540294 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64b9b9d79c-64kd2" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" Jan 21 23:37:34.560683 containerd[2003]: 2026-01-21 23:37:34.288 [INFO][4987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0 coredns-668d6bf9bc- kube-system 62aba3b1-39b5-4efd-90ba-082af4dc8ffb 
879 0 2026-01-21 23:36:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-34 coredns-668d6bf9bc-rzmxg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2473b56ca62 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzmxg" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-" Jan 21 23:37:34.560683 containerd[2003]: 2026-01-21 23:37:34.289 [INFO][4987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzmxg" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" Jan 21 23:37:34.560683 containerd[2003]: 2026-01-21 23:37:34.385 [INFO][5009] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" HandleID="k8s-pod-network.34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Workload="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.385 [INFO][5009] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" HandleID="k8s-pod-network.34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Workload="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031ae00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-34", "pod":"coredns-668d6bf9bc-rzmxg", "timestamp":"2026-01-21 23:37:34.385146234 +0000 UTC"}, Hostname:"ip-172-31-29-34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.385 [INFO][5009] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.385 [INFO][5009] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
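Both whisker containers failed to start because the node could not resolve ghcr.io/flatcar/calico/whisker:v3.30.4 or ghcr.io/flatcar/calico/whisker-backend:v3.30.4; containerd logged a plain 404 from ghcr.io before kubelet surfaced the ErrImagePull. One way to check from outside the node whether the tag exists at all is to ask the registry directly. The sketch below is only a diagnostic aid: it assumes ghcr.io honours the standard OCI distribution token flow for anonymous pulls of public images (a private repository and a missing one both look like 404 anonymously), with the repository and tag names taken verbatim from the log:

    import json
    import urllib.error
    import urllib.request

    REPO = "flatcar/calico/whisker"   # also try "flatcar/calico/whisker-backend"
    TAG = "v3.30.4"

    # Anonymous pull token (assumption: the standard registry token endpoint).
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print(f"{REPO}:{TAG} resolves, status {resp.status}")
    except urllib.error.HTTPError as err:
        # A 404 here matches the "not found" containerd reported above.
        print(f"{REPO}:{TAG} did not resolve, status {err.code}")

If the manifest really is absent, the pod will keep failing to sync until the image is published under that tag or the workload is pointed at a tag that exists.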
Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.385 [INFO][5009] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-34' Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.412 [INFO][5009] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" host="ip-172-31-29-34" Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.435 [INFO][5009] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-34" Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.456 [INFO][5009] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.459 [INFO][5009] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.465 [INFO][5009] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:34.561019 containerd[2003]: 2026-01-21 23:37:34.465 [INFO][5009] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" host="ip-172-31-29-34" Jan 21 23:37:34.561530 containerd[2003]: 2026-01-21 23:37:34.469 [INFO][5009] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970 Jan 21 23:37:34.561530 containerd[2003]: 2026-01-21 23:37:34.476 [INFO][5009] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" host="ip-172-31-29-34" Jan 21 23:37:34.561530 containerd[2003]: 2026-01-21 23:37:34.488 [INFO][5009] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.130/26] block=192.168.73.128/26 handle="k8s-pod-network.34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" host="ip-172-31-29-34" Jan 21 23:37:34.561530 containerd[2003]: 2026-01-21 23:37:34.488 [INFO][5009] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.130/26] handle="k8s-pod-network.34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" host="ip-172-31-29-34" Jan 21 23:37:34.561530 containerd[2003]: 2026-01-21 23:37:34.488 [INFO][5009] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
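The IPAM exchanges above follow the same shape each time: take the host-wide lock, confirm this node's affinity for the block 192.168.73.128/26, then claim the next free address from it, which has so far produced 192.168.73.129/26 for the whisker pod and 192.168.73.130/26 for this coredns pod. The arithmetic on the /26 is easy to sanity-check outside the cluster; the snippet below is only the plain subnet view, not a model of Calico's allocator, which has its own rules for which addresses in an affine block it hands out:

    import ipaddress

    # The affine block this node keeps loading in the records above.
    block = ipaddress.ip_network("192.168.73.128/26")

    print(block.num_addresses)   # 64 addresses, 192.168.73.128 through 192.168.73.191
    print([str(ip) for ip in list(block.hosts())[:3]])
    # ['192.168.73.129', '192.168.73.130', '192.168.73.131'], the order in which
    # whisker, coredns, and then goldmane receive addresses in this log.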
Jan 21 23:37:34.561530 containerd[2003]: 2026-01-21 23:37:34.488 [INFO][5009] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.130/26] IPv6=[] ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" HandleID="k8s-pod-network.34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Workload="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" Jan 21 23:37:34.563719 containerd[2003]: 2026-01-21 23:37:34.495 [INFO][4987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzmxg" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"62aba3b1-39b5-4efd-90ba-082af4dc8ffb", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"", Pod:"coredns-668d6bf9bc-rzmxg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2473b56ca62", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:34.563719 containerd[2003]: 2026-01-21 23:37:34.495 [INFO][4987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.130/32] ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzmxg" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" Jan 21 23:37:34.563719 containerd[2003]: 2026-01-21 23:37:34.496 [INFO][4987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2473b56ca62 ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzmxg" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" Jan 21 23:37:34.563719 containerd[2003]: 2026-01-21 23:37:34.510 [INFO][4987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzmxg" 
WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" Jan 21 23:37:34.563719 containerd[2003]: 2026-01-21 23:37:34.519 [INFO][4987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzmxg" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"62aba3b1-39b5-4efd-90ba-082af4dc8ffb", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970", Pod:"coredns-668d6bf9bc-rzmxg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2473b56ca62", MAC:"e2:a1:19:f1:49:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:34.563719 containerd[2003]: 2026-01-21 23:37:34.549 [INFO][4987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzmxg" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--rzmxg-eth0" Jan 21 23:37:34.700521 systemd-networkd[1597]: cali0dc6d58d960: Link UP Jan 21 23:37:34.709116 systemd-networkd[1597]: cali0dc6d58d960: Gained carrier Jan 21 23:37:34.719667 containerd[2003]: time="2026-01-21T23:37:34.719478985Z" level=info msg="connecting to shim 34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970" address="unix:///run/containerd/s/4b2037d4a70eef9e7be2ae27cfa9c4f9ce41b39f81f31ac00b06cb16c54bd543" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:34.781053 kernel: kauditd_printk_skb: 228 callbacks suppressed Jan 21 23:37:34.781214 kernel: audit: type=1325 audit(1769038654.777:673): table=filter:128 family=2 entries=42 op=nft_register_chain pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:34.777000 audit[5045]: NETFILTER_CFG table=filter:128 family=2 entries=42 op=nft_register_chain pid=5045 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:34.777000 audit[5045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffe7443100 a2=0 a3=ffffbca00fa8 items=0 ppid=4703 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:34.794710 kernel: audit: type=1300 audit(1769038654.777:673): arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffe7443100 a2=0 a3=ffffbca00fa8 items=0 ppid=4703 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:34.777000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:34.807008 kernel: audit: type=1327 audit(1769038654.777:673): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.338 [INFO][4983] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0 goldmane-666569f655- calico-system c9be5f6a-b238-4709-b078-d405c449b532 892 0 2026-01-21 23:37:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-34 goldmane-666569f655-jq2tn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0dc6d58d960 [] [] }} ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Namespace="calico-system" Pod="goldmane-666569f655-jq2tn" WorkloadEndpoint="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.339 [INFO][4983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Namespace="calico-system" Pod="goldmane-666569f655-jq2tn" WorkloadEndpoint="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.451 [INFO][5015] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" HandleID="k8s-pod-network.7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Workload="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.452 [INFO][5015] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" HandleID="k8s-pod-network.7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Workload="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b840), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-34", "pod":"goldmane-666569f655-jq2tn", "timestamp":"2026-01-21 23:37:34.451472478 +0000 UTC"}, Hostname:"ip-172-31-29-34", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.453 [INFO][5015] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.489 [INFO][5015] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.489 [INFO][5015] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-34' Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.526 [INFO][5015] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" host="ip-172-31-29-34" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.540 [INFO][5015] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-34" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.563 [INFO][5015] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.569 [INFO][5015] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.582 [INFO][5015] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.582 [INFO][5015] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" host="ip-172-31-29-34" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.588 [INFO][5015] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894 Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.609 [INFO][5015] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" host="ip-172-31-29-34" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.647 [INFO][5015] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.131/26] block=192.168.73.128/26 handle="k8s-pod-network.7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" host="ip-172-31-29-34" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.648 [INFO][5015] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.131/26] handle="k8s-pod-network.7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" host="ip-172-31-29-34" Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.649 [INFO][5015] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
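The IPAM walk above confirms the host's affinity for the block 192.168.73.128/26 and claims 192.168.73.131 from it for goldmane-666569f655-jq2tn, just as 192.168.73.130 was claimed for coredns-668d6bf9bc-rzmxg a moment earlier. As a minimal sketch of what that block covers, here is a check using Go's standard net/netip package rather than Calico's own IPAM code (the program is illustrative only and not part of any component in this log):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affinity block the plugin loads for host ip-172-31-29-34.
	block := netip.MustParsePrefix("192.168.73.128/26")

	// A /26 spans 64 addresses: 192.168.73.128 through 192.168.73.191.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))

	// Addresses handed out so far in this log:
	// .130 (coredns-668d6bf9bc-rzmxg) and .131 (goldmane-666569f655-jq2tn).
	for _, s := range []string{"192.168.73.130", "192.168.73.131"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
}
```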
Jan 21 23:37:34.810029 containerd[2003]: 2026-01-21 23:37:34.649 [INFO][5015] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.131/26] IPv6=[] ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" HandleID="k8s-pod-network.7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Workload="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" Jan 21 23:37:34.811271 containerd[2003]: 2026-01-21 23:37:34.668 [INFO][4983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Namespace="calico-system" Pod="goldmane-666569f655-jq2tn" WorkloadEndpoint="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c9be5f6a-b238-4709-b078-d405c449b532", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"", Pod:"goldmane-666569f655-jq2tn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0dc6d58d960", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:34.811271 containerd[2003]: 2026-01-21 23:37:34.668 [INFO][4983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.131/32] ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Namespace="calico-system" Pod="goldmane-666569f655-jq2tn" WorkloadEndpoint="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" Jan 21 23:37:34.811271 containerd[2003]: 2026-01-21 23:37:34.668 [INFO][4983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dc6d58d960 ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Namespace="calico-system" Pod="goldmane-666569f655-jq2tn" WorkloadEndpoint="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" Jan 21 23:37:34.811271 containerd[2003]: 2026-01-21 23:37:34.722 [INFO][4983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Namespace="calico-system" Pod="goldmane-666569f655-jq2tn" WorkloadEndpoint="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" Jan 21 23:37:34.811271 containerd[2003]: 2026-01-21 23:37:34.727 [INFO][4983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Namespace="calico-system" Pod="goldmane-666569f655-jq2tn" 
WorkloadEndpoint="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c9be5f6a-b238-4709-b078-d405c449b532", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894", Pod:"goldmane-666569f655-jq2tn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0dc6d58d960", MAC:"b6:32:3a:77:8f:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:34.811271 containerd[2003]: 2026-01-21 23:37:34.777 [INFO][4983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" Namespace="calico-system" Pod="goldmane-666569f655-jq2tn" WorkloadEndpoint="ip--172--31--29--34-k8s-goldmane--666569f655--jq2tn-eth0" Jan 21 23:37:34.835222 systemd-networkd[1597]: vxlan.calico: Gained IPv6LL Jan 21 23:37:34.923709 systemd[1]: Started cri-containerd-34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970.scope - libcontainer container 34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970. 
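The audit PROCTITLE records interleaved through this section (the iptables-nft-restore event above and the runc events that follow) carry the process command line as hex-encoded, NUL-separated argv. A small decoding sketch in Go, using only the standard library and not tied to any of the logged components:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// proctitle value copied from the NETFILTER_CFG audit event above.
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// argv elements are NUL-separated in the audit record.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```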
Jan 21 23:37:34.937226 containerd[2003]: time="2026-01-21T23:37:34.937169030Z" level=info msg="connecting to shim 7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894" address="unix:///run/containerd/s/542c275f4eaadd5949923e54006d8bd3f3b07dd68f6eb6a6bef9e779ef1f020f" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:34.985000 audit: BPF prog-id=218 op=LOAD Jan 21 23:37:34.991308 kernel: audit: type=1334 audit(1769038654.985:674): prog-id=218 op=LOAD Jan 21 23:37:34.991000 audit: BPF prog-id=219 op=LOAD Jan 21 23:37:34.991000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=5043 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.002043 kernel: audit: type=1334 audit(1769038654.991:675): prog-id=219 op=LOAD Jan 21 23:37:35.002841 kernel: audit: type=1300 audit(1769038654.991:675): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=5043 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.007363 kernel: audit: type=1327 audit(1769038654.991:675): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363037323433623666333333626163666633353735643263353034 Jan 21 23:37:34.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363037323433623666333333626163666633353735643263353034 Jan 21 23:37:35.001000 audit: BPF prog-id=219 op=UNLOAD Jan 21 23:37:35.019526 kernel: audit: type=1334 audit(1769038655.001:676): prog-id=219 op=UNLOAD Jan 21 23:37:35.001000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5043 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.026952 kernel: audit: type=1300 audit(1769038655.001:676): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5043 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.028503 kernel: audit: type=1327 audit(1769038655.001:676): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363037323433623666333333626163666633353735643263353034 Jan 21 23:37:35.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363037323433623666333333626163666633353735643263353034 Jan 21 23:37:35.001000 audit: BPF prog-id=220 op=LOAD Jan 21 23:37:35.001000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=400017e3e8 a2=98 a3=0 items=0 ppid=5043 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363037323433623666333333626163666633353735643263353034 Jan 21 23:37:35.001000 audit: BPF prog-id=221 op=LOAD Jan 21 23:37:35.001000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=5043 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363037323433623666333333626163666633353735643263353034 Jan 21 23:37:35.001000 audit: BPF prog-id=221 op=UNLOAD Jan 21 23:37:35.001000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5043 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363037323433623666333333626163666633353735643263353034 Jan 21 23:37:35.001000 audit: BPF prog-id=220 op=UNLOAD Jan 21 23:37:35.001000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5043 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363037323433623666333333626163666633353735643263353034 Jan 21 23:37:35.003000 audit[5110]: NETFILTER_CFG table=filter:129 family=2 entries=48 op=nft_register_chain pid=5110 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:35.003000 audit[5110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26368 a0=3 a1=ffffd126cab0 a2=0 a3=ffffa37cefa8 items=0 ppid=4703 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.003000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:35.001000 audit: BPF prog-id=222 op=LOAD Jan 21 23:37:35.001000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=5043 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363037323433623666333333626163666633353735643263353034 Jan 21 23:37:35.037305 systemd[1]: Started cri-containerd-7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894.scope - libcontainer container 7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894. Jan 21 23:37:35.090291 systemd-networkd[1597]: cali81664905efe: Gained IPv6LL Jan 21 23:37:35.103317 containerd[2003]: time="2026-01-21T23:37:35.102607274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glnmd,Uid:eed98923-e571-4f3e-9b7a-fa237350831a,Namespace:kube-system,Attempt:0,}" Jan 21 23:37:35.105358 containerd[2003]: time="2026-01-21T23:37:35.104026932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84667c98fc-g2rqz,Uid:de79900c-2c4b-48e5-9995-14ed014509c5,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:37:35.146000 audit: BPF prog-id=223 op=LOAD Jan 21 23:37:35.149000 audit: BPF prog-id=224 op=LOAD Jan 21 23:37:35.149000 audit[5104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=5086 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303137386437316434663965306135656630656633376565346561 Jan 21 23:37:35.151000 audit: BPF prog-id=224 op=UNLOAD Jan 21 23:37:35.151000 audit[5104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5086 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303137386437316434663965306135656630656633376565346561 Jan 21 23:37:35.153000 audit: BPF prog-id=225 op=LOAD Jan 21 23:37:35.153000 audit[5104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=5086 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303137386437316434663965306135656630656633376565346561 Jan 21 23:37:35.157460 containerd[2003]: time="2026-01-21T23:37:35.156800544Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-rzmxg,Uid:62aba3b1-39b5-4efd-90ba-082af4dc8ffb,Namespace:kube-system,Attempt:0,} returns sandbox id \"34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970\"" Jan 21 23:37:35.156000 audit: BPF prog-id=226 op=LOAD Jan 21 23:37:35.156000 audit[5104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=5086 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303137386437316434663965306135656630656633376565346561 Jan 21 23:37:35.158000 audit: BPF prog-id=226 op=UNLOAD Jan 21 23:37:35.158000 audit[5104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5086 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303137386437316434663965306135656630656633376565346561 Jan 21 23:37:35.162000 audit: BPF prog-id=225 op=UNLOAD Jan 21 23:37:35.162000 audit[5104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5086 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303137386437316434663965306135656630656633376565346561 Jan 21 23:37:35.172836 containerd[2003]: time="2026-01-21T23:37:35.172372606Z" level=info msg="CreateContainer within sandbox \"34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 23:37:35.162000 audit: BPF prog-id=227 op=LOAD Jan 21 23:37:35.162000 audit[5104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=5086 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763303137386437316434663965306135656630656633376565346561 Jan 21 23:37:35.218730 containerd[2003]: time="2026-01-21T23:37:35.218662621Z" level=info msg="Container 45d0ac8a8b150b5c67ea811e403c11204e388d0792e0437c92e5c64d7e708af6: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:37:35.245139 containerd[2003]: time="2026-01-21T23:37:35.244091125Z" level=info msg="CreateContainer within sandbox 
\"34607243b6f333bacff3575d2c50462f6f921af539473f337788996d48050970\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45d0ac8a8b150b5c67ea811e403c11204e388d0792e0437c92e5c64d7e708af6\"" Jan 21 23:37:35.246333 containerd[2003]: time="2026-01-21T23:37:35.246277607Z" level=info msg="StartContainer for \"45d0ac8a8b150b5c67ea811e403c11204e388d0792e0437c92e5c64d7e708af6\"" Jan 21 23:37:35.252305 containerd[2003]: time="2026-01-21T23:37:35.251716934Z" level=info msg="connecting to shim 45d0ac8a8b150b5c67ea811e403c11204e388d0792e0437c92e5c64d7e708af6" address="unix:///run/containerd/s/4b2037d4a70eef9e7be2ae27cfa9c4f9ce41b39f81f31ac00b06cb16c54bd543" protocol=ttrpc version=3 Jan 21 23:37:35.373330 containerd[2003]: time="2026-01-21T23:37:35.372505546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jq2tn,Uid:c9be5f6a-b238-4709-b078-d405c449b532,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c0178d71d4f9e0a5ef0ef37ee4ea519754f832826d27ae6dfbba9d2a4c29894\"" Jan 21 23:37:35.382775 containerd[2003]: time="2026-01-21T23:37:35.380303985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 23:37:35.380732 systemd[1]: Started cri-containerd-45d0ac8a8b150b5c67ea811e403c11204e388d0792e0437c92e5c64d7e708af6.scope - libcontainer container 45d0ac8a8b150b5c67ea811e403c11204e388d0792e0437c92e5c64d7e708af6. Jan 21 23:37:35.440000 audit: BPF prog-id=228 op=LOAD Jan 21 23:37:35.442000 audit: BPF prog-id=229 op=LOAD Jan 21 23:37:35.442000 audit[5154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5043 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643061633861386231353062356336376561383131653430336331 Jan 21 23:37:35.442000 audit: BPF prog-id=229 op=UNLOAD Jan 21 23:37:35.442000 audit[5154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5043 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643061633861386231353062356336376561383131653430336331 Jan 21 23:37:35.444000 audit: BPF prog-id=230 op=LOAD Jan 21 23:37:35.444000 audit[5154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5043 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643061633861386231353062356336376561383131653430336331 Jan 21 23:37:35.445000 audit: BPF prog-id=231 op=LOAD Jan 21 
23:37:35.445000 audit[5154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5043 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643061633861386231353062356336376561383131653430336331 Jan 21 23:37:35.446000 audit: BPF prog-id=231 op=UNLOAD Jan 21 23:37:35.446000 audit[5154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5043 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643061633861386231353062356336376561383131653430336331 Jan 21 23:37:35.446000 audit: BPF prog-id=230 op=UNLOAD Jan 21 23:37:35.446000 audit[5154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5043 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643061633861386231353062356336376561383131653430336331 Jan 21 23:37:35.447000 audit: BPF prog-id=232 op=LOAD Jan 21 23:37:35.447000 audit[5154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5043 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643061633861386231353062356336376561383131653430336331 Jan 21 23:37:35.515461 containerd[2003]: time="2026-01-21T23:37:35.515395921Z" level=info msg="StartContainer for \"45d0ac8a8b150b5c67ea811e403c11204e388d0792e0437c92e5c64d7e708af6\" returns successfully" Jan 21 23:37:35.551270 kubelet[3515]: E0121 23:37:35.551080 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64b9b9d79c-64kd2" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" Jan 21 23:37:35.611415 kubelet[3515]: I0121 23:37:35.611307 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rzmxg" podStartSLOduration=58.611281235 podStartE2EDuration="58.611281235s" podCreationTimestamp="2026-01-21 23:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:37:35.580545339 +0000 UTC m=+61.883013368" watchObservedRunningTime="2026-01-21 23:37:35.611281235 +0000 UTC m=+61.913749252" Jan 21 23:37:35.670047 systemd-networkd[1597]: cali86f5d6cbaa8: Link UP Jan 21 23:37:35.676513 systemd-networkd[1597]: cali86f5d6cbaa8: Gained carrier Jan 21 23:37:35.694469 containerd[2003]: time="2026-01-21T23:37:35.694337047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:35.699018 containerd[2003]: time="2026-01-21T23:37:35.698289070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:35.700163 containerd[2003]: time="2026-01-21T23:37:35.699174311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 23:37:35.701552 kubelet[3515]: E0121 23:37:35.701489 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:37:35.701688 kubelet[3515]: E0121 23:37:35.701556 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:37:35.702522 kubelet[3515]: E0121 23:37:35.701769 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdxpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jq2tn_calico-system(c9be5f6a-b238-4709-b078-d405c449b532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:35.703849 kubelet[3515]: E0121 23:37:35.703402 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.370 [INFO][5139] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0 coredns-668d6bf9bc- kube-system eed98923-e571-4f3e-9b7a-fa237350831a 887 0 2026-01-21 23:36:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-34 coredns-668d6bf9bc-glnmd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86f5d6cbaa8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-glnmd" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.377 [INFO][5139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-glnmd" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.490 [INFO][5191] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" HandleID="k8s-pod-network.37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Workload="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.490 [INFO][5191] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" HandleID="k8s-pod-network.37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Workload="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321820), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-34", "pod":"coredns-668d6bf9bc-glnmd", "timestamp":"2026-01-21 23:37:35.49045465 +0000 UTC"}, Hostname:"ip-172-31-29-34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.490 [INFO][5191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.490 [INFO][5191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.491 [INFO][5191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-34' Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.520 [INFO][5191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" host="ip-172-31-29-34" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.543 [INFO][5191] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-34" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.570 [INFO][5191] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.581 [INFO][5191] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.591 [INFO][5191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.591 [INFO][5191] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" host="ip-172-31-29-34" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.603 [INFO][5191] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.623 [INFO][5191] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" host="ip-172-31-29-34" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.644 [INFO][5191] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.132/26] block=192.168.73.128/26 handle="k8s-pod-network.37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" host="ip-172-31-29-34" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.645 [INFO][5191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.132/26] handle="k8s-pod-network.37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" host="ip-172-31-29-34" Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.646 [INFO][5191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
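The WorkloadEndpoint dumps in this section (the coredns endpoints above and the one that follows) print port numbers as Go hex literals, e.g. Port:0x35 and Port:0x23c1. Converted back to decimal these are the usual CoreDNS ports; a trivial sketch of the conversion, included only for readability:

```go
package main

import "fmt"

func main() {
	// Hex values as printed in the WorkloadEndpointPort structs.
	ports := []struct {
		name string
		port uint16
	}{
		{"dns", 0x35},       // 53/UDP
		{"dns-tcp", 0x35},   // 53/TCP
		{"metrics", 0x23c1}, // 9153, the default CoreDNS Prometheus port
	}
	for _, p := range ports {
		fmt.Printf("%-8s %d\n", p.name, p.port)
	}
}
```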
Jan 21 23:37:35.734288 containerd[2003]: 2026-01-21 23:37:35.646 [INFO][5191] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.132/26] IPv6=[] ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" HandleID="k8s-pod-network.37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Workload="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" Jan 21 23:37:35.739024 containerd[2003]: 2026-01-21 23:37:35.654 [INFO][5139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-glnmd" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eed98923-e571-4f3e-9b7a-fa237350831a", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"", Pod:"coredns-668d6bf9bc-glnmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86f5d6cbaa8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:35.739024 containerd[2003]: 2026-01-21 23:37:35.656 [INFO][5139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.132/32] ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-glnmd" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" Jan 21 23:37:35.739024 containerd[2003]: 2026-01-21 23:37:35.656 [INFO][5139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86f5d6cbaa8 ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-glnmd" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" Jan 21 23:37:35.739024 containerd[2003]: 2026-01-21 23:37:35.678 [INFO][5139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-glnmd" 
WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" Jan 21 23:37:35.739024 containerd[2003]: 2026-01-21 23:37:35.684 [INFO][5139] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-glnmd" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eed98923-e571-4f3e-9b7a-fa237350831a", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed", Pod:"coredns-668d6bf9bc-glnmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86f5d6cbaa8", MAC:"82:d4:d7:ff:32:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:35.739024 containerd[2003]: 2026-01-21 23:37:35.727 [INFO][5139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-glnmd" WorkloadEndpoint="ip--172--31--29--34-k8s-coredns--668d6bf9bc--glnmd-eth0" Jan 21 23:37:35.822416 containerd[2003]: time="2026-01-21T23:37:35.822268265Z" level=info msg="connecting to shim 37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed" address="unix:///run/containerd/s/8d1fd3020ed2bb0839690d89c82d88e5fba7bd73dd02dea00447933eda775bc1" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:35.822000 audit[5221]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:35.822000 audit[5221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe468a460 a2=0 a3=1 items=0 ppid=3660 pid=5221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.822000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:35.841334 systemd-networkd[1597]: cali7824622c8ff: Link UP Jan 21 23:37:35.844764 systemd-networkd[1597]: cali7824622c8ff: Gained carrier Jan 21 23:37:35.841000 audit[5221]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:35.841000 audit[5221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe468a460 a2=0 a3=1 items=0 ppid=3660 pid=5221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:35.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:35.957399 systemd[1]: Started cri-containerd-37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed.scope - libcontainer container 37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed. Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.315 [INFO][5129] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0 calico-apiserver-84667c98fc- calico-apiserver de79900c-2c4b-48e5-9995-14ed014509c5 890 0 2026-01-21 23:36:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84667c98fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-34 calico-apiserver-84667c98fc-g2rqz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7824622c8ff [] [] }} ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-g2rqz" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.318 [INFO][5129] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-g2rqz" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.498 [INFO][5178] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" HandleID="k8s-pod-network.c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Workload="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.501 [INFO][5178] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" HandleID="k8s-pod-network.c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Workload="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-34", "pod":"calico-apiserver-84667c98fc-g2rqz", "timestamp":"2026-01-21 
23:37:35.49864353 +0000 UTC"}, Hostname:"ip-172-31-29-34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.501 [INFO][5178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.645 [INFO][5178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.645 [INFO][5178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-34' Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.696 [INFO][5178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" host="ip-172-31-29-34" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.721 [INFO][5178] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-34" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.742 [INFO][5178] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.748 [INFO][5178] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.756 [INFO][5178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.756 [INFO][5178] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" host="ip-172-31-29-34" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.763 [INFO][5178] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.780 [INFO][5178] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" host="ip-172-31-29-34" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.810 [INFO][5178] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.133/26] block=192.168.73.128/26 handle="k8s-pod-network.c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" host="ip-172-31-29-34" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.810 [INFO][5178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.133/26] handle="k8s-pod-network.c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" host="ip-172-31-29-34" Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.810 [INFO][5178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
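The SYSCALL audit records in this section identify the architecture and system call only by number (arch=c00000b7, syscall=280, 211 and 57). On this aarch64 machine those numbers should correspond, per the arm64 generic syscall table, to bpf, sendmsg and close respectively; the lookup below is a hand-written assumption covering only the numbers seen here, not an exhaustive table:

```go
package main

import "fmt"

func main() {
	// arch=c00000b7 is AUDIT_ARCH_AARCH64 (EM_AARCH64 with the 64-bit and
	// little-endian audit flags set), matching the arm64 kernel in this log.
	fmt.Printf("arch=%x\n", uint32(0xC00000B7))

	// Syscall numbers observed in the audit records (arm64 generic table).
	syscalls := map[int]string{
		57:  "close",   // runc closing BPF prog fds (the op=UNLOAD events)
		211: "sendmsg", // iptables-nft-restore sending nf_tables netlink batches
		280: "bpf",     // runc loading BPF programs (the op=LOAD events)
	}
	for _, nr := range []int{57, 211, 280} {
		fmt.Printf("syscall=%d -> %s\n", nr, syscalls[nr])
	}
}
```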
Jan 21 23:37:35.967997 containerd[2003]: 2026-01-21 23:37:35.810 [INFO][5178] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.133/26] IPv6=[] ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" HandleID="k8s-pod-network.c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Workload="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" Jan 21 23:37:35.971365 containerd[2003]: 2026-01-21 23:37:35.827 [INFO][5129] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-g2rqz" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0", GenerateName:"calico-apiserver-84667c98fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"de79900c-2c4b-48e5-9995-14ed014509c5", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84667c98fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"", Pod:"calico-apiserver-84667c98fc-g2rqz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7824622c8ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:35.971365 containerd[2003]: 2026-01-21 23:37:35.828 [INFO][5129] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.133/32] ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-g2rqz" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" Jan 21 23:37:35.971365 containerd[2003]: 2026-01-21 23:37:35.828 [INFO][5129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7824622c8ff ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-g2rqz" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" Jan 21 23:37:35.971365 containerd[2003]: 2026-01-21 23:37:35.848 [INFO][5129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-g2rqz" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" Jan 21 23:37:35.971365 containerd[2003]: 2026-01-21 23:37:35.875 [INFO][5129] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-g2rqz" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0", GenerateName:"calico-apiserver-84667c98fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"de79900c-2c4b-48e5-9995-14ed014509c5", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84667c98fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be", Pod:"calico-apiserver-84667c98fc-g2rqz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7824622c8ff", MAC:"c2:e0:06:31:6e:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:35.971365 containerd[2003]: 2026-01-21 23:37:35.956 [INFO][5129] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-g2rqz" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--g2rqz-eth0" Jan 21 23:37:35.987389 systemd-networkd[1597]: cali2473b56ca62: Gained IPv6LL Jan 21 23:37:36.012000 audit[5260]: NETFILTER_CFG table=filter:132 family=2 entries=46 op=nft_register_chain pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:36.012000 audit[5260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23196 a0=3 a1=ffffc07db060 a2=0 a3=ffff84fe8fa8 items=0 ppid=4703 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.012000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:36.032322 containerd[2003]: time="2026-01-21T23:37:36.032249971Z" level=info msg="connecting to shim c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be" address="unix:///run/containerd/s/e37675b48926b613fa04c2625c107ccf44a1e06abeb496c982e04215595c39db" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:36.041000 audit: BPF prog-id=233 op=LOAD Jan 21 23:37:36.044000 audit: BPF prog-id=234 op=LOAD Jan 21 23:37:36.044000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=5230 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337313138643733643966616564393262616164366637376630366530 Jan 21 23:37:36.045000 audit: BPF prog-id=234 op=UNLOAD Jan 21 23:37:36.045000 audit[5247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5230 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337313138643733643966616564393262616164366637376630366530 Jan 21 23:37:36.048000 audit: BPF prog-id=235 op=LOAD Jan 21 23:37:36.048000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=5230 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337313138643733643966616564393262616164366637376630366530 Jan 21 23:37:36.048000 audit: BPF prog-id=236 op=LOAD Jan 21 23:37:36.048000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=5230 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337313138643733643966616564393262616164366637376630366530 Jan 21 23:37:36.048000 audit: BPF prog-id=236 op=UNLOAD Jan 21 23:37:36.048000 audit[5247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5230 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337313138643733643966616564393262616164366637376630366530 Jan 21 23:37:36.048000 audit: BPF prog-id=235 op=UNLOAD Jan 21 23:37:36.048000 audit[5247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5230 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337313138643733643966616564393262616164366637376630366530 Jan 21 23:37:36.048000 audit: BPF prog-id=237 op=LOAD Jan 21 23:37:36.048000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=5230 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337313138643733643966616564393262616164366637376630366530 Jan 21 23:37:36.120000 audit[5306]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5306 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:36.120000 audit[5306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd56cae80 a2=0 a3=1 items=0 ppid=3660 pid=5306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:36.131860 systemd[1]: Started cri-containerd-c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be.scope - libcontainer container c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be. 
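The audit PROCTITLE records above carry the audited command line hex-encoded, with NUL bytes separating the arguments (the same layout as /proc/<pid>/cmdline); the value logged for the NETFILTER_CFG events decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the runc records decode the same way. A minimal, purely illustrative Go sketch of that decoding, using the proctitle value copied from the log:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// proctitle value copied verbatim from the NETFILTER_CFG audit records above.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Arguments are NUL-separated, exactly as in /proc/<pid>/cmdline.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
}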
Jan 21 23:37:36.134000 audit[5306]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5306 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:36.134000 audit[5306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd56cae80 a2=0 a3=1 items=0 ppid=3660 pid=5306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:36.158000 audit[5310]: NETFILTER_CFG table=filter:135 family=2 entries=58 op=nft_register_chain pid=5310 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:36.158000 audit[5310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30568 a0=3 a1=ffffdf579c10 a2=0 a3=ffff8c79afa8 items=0 ppid=4703 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.158000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:36.178586 containerd[2003]: time="2026-01-21T23:37:36.178440603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glnmd,Uid:eed98923-e571-4f3e-9b7a-fa237350831a,Namespace:kube-system,Attempt:0,} returns sandbox id \"37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed\"" Jan 21 23:37:36.187427 containerd[2003]: time="2026-01-21T23:37:36.187190898Z" level=info msg="CreateContainer within sandbox \"37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 23:37:36.202995 containerd[2003]: time="2026-01-21T23:37:36.202914936Z" level=info msg="Container e4b7b47ef162986e6057de7046b836c8dea4f45ae6334d728542663f4d7e3142: CDI devices from CRI Config.CDIDevices: []" Jan 21 23:37:36.202000 audit: BPF prog-id=238 op=LOAD Jan 21 23:37:36.204000 audit: BPF prog-id=239 op=LOAD Jan 21 23:37:36.204000 audit[5293]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5281 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332363963313430613130313362646230383631616366386134346663 Jan 21 23:37:36.205000 audit: BPF prog-id=239 op=UNLOAD Jan 21 23:37:36.205000 audit[5293]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5281 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.205000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332363963313430613130313362646230383631616366386134346663 Jan 21 23:37:36.205000 audit: BPF prog-id=240 op=LOAD Jan 21 23:37:36.205000 audit[5293]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5281 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332363963313430613130313362646230383631616366386134346663 Jan 21 23:37:36.205000 audit: BPF prog-id=241 op=LOAD Jan 21 23:37:36.205000 audit[5293]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5281 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332363963313430613130313362646230383631616366386134346663 Jan 21 23:37:36.205000 audit: BPF prog-id=241 op=UNLOAD Jan 21 23:37:36.205000 audit[5293]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5281 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332363963313430613130313362646230383631616366386134346663 Jan 21 23:37:36.205000 audit: BPF prog-id=240 op=UNLOAD Jan 21 23:37:36.205000 audit[5293]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5281 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332363963313430613130313362646230383631616366386134346663 Jan 21 23:37:36.206000 audit: BPF prog-id=242 op=LOAD Jan 21 23:37:36.206000 audit[5293]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5281 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.206000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332363963313430613130313362646230383631616366386134346663 Jan 21 23:37:36.217622 containerd[2003]: time="2026-01-21T23:37:36.217541847Z" level=info msg="CreateContainer within sandbox \"37118d73d9faed92baad6f77f06e00e3844232efe684a82af76d0c50ac5dc1ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4b7b47ef162986e6057de7046b836c8dea4f45ae6334d728542663f4d7e3142\"" Jan 21 23:37:36.218681 containerd[2003]: time="2026-01-21T23:37:36.218627556Z" level=info msg="StartContainer for \"e4b7b47ef162986e6057de7046b836c8dea4f45ae6334d728542663f4d7e3142\"" Jan 21 23:37:36.221472 containerd[2003]: time="2026-01-21T23:37:36.221395795Z" level=info msg="connecting to shim e4b7b47ef162986e6057de7046b836c8dea4f45ae6334d728542663f4d7e3142" address="unix:///run/containerd/s/8d1fd3020ed2bb0839690d89c82d88e5fba7bd73dd02dea00447933eda775bc1" protocol=ttrpc version=3 Jan 21 23:37:36.269590 systemd[1]: Started cri-containerd-e4b7b47ef162986e6057de7046b836c8dea4f45ae6334d728542663f4d7e3142.scope - libcontainer container e4b7b47ef162986e6057de7046b836c8dea4f45ae6334d728542663f4d7e3142. Jan 21 23:37:36.310000 audit: BPF prog-id=243 op=LOAD Jan 21 23:37:36.311000 audit: BPF prog-id=244 op=LOAD Jan 21 23:37:36.311000 audit[5321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5230 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534623762343765663136323938366536303537646537303436623833 Jan 21 23:37:36.312000 audit: BPF prog-id=244 op=UNLOAD Jan 21 23:37:36.312000 audit[5321]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5230 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534623762343765663136323938366536303537646537303436623833 Jan 21 23:37:36.313000 audit: BPF prog-id=245 op=LOAD Jan 21 23:37:36.313000 audit[5321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5230 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534623762343765663136323938366536303537646537303436623833 Jan 21 23:37:36.313000 audit: BPF prog-id=246 op=LOAD Jan 21 23:37:36.313000 audit[5321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 
a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5230 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534623762343765663136323938366536303537646537303436623833 Jan 21 23:37:36.313000 audit: BPF prog-id=246 op=UNLOAD Jan 21 23:37:36.313000 audit[5321]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5230 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534623762343765663136323938366536303537646537303436623833 Jan 21 23:37:36.314000 audit: BPF prog-id=245 op=UNLOAD Jan 21 23:37:36.314000 audit[5321]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5230 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534623762343765663136323938366536303537646537303436623833 Jan 21 23:37:36.314000 audit: BPF prog-id=247 op=LOAD Jan 21 23:37:36.314000 audit[5321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5230 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534623762343765663136323938366536303537646537303436623833 Jan 21 23:37:36.341875 containerd[2003]: time="2026-01-21T23:37:36.341795249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84667c98fc-g2rqz,Uid:de79900c-2c4b-48e5-9995-14ed014509c5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c269c140a1013bdb0861acf8a44fc489378d3f9f4c8853e5fda20a86901862be\"" Jan 21 23:37:36.347372 containerd[2003]: time="2026-01-21T23:37:36.346361341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:37:36.364882 containerd[2003]: time="2026-01-21T23:37:36.364715832Z" level=info msg="StartContainer for \"e4b7b47ef162986e6057de7046b836c8dea4f45ae6334d728542663f4d7e3142\" returns successfully" Jan 21 23:37:36.434502 systemd-networkd[1597]: cali0dc6d58d960: Gained IPv6LL Jan 21 23:37:36.555968 kubelet[3515]: E0121 23:37:36.555845 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:37:36.656581 kubelet[3515]: I0121 23:37:36.656486 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-glnmd" podStartSLOduration=59.656459939 podStartE2EDuration="59.656459939s" podCreationTimestamp="2026-01-21 23:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 23:37:36.655062746 +0000 UTC m=+62.957530787" watchObservedRunningTime="2026-01-21 23:37:36.656459939 +0000 UTC m=+62.958927980" Jan 21 23:37:36.682000 audit[5362]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:36.682000 audit[5362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffebb11770 a2=0 a3=1 items=0 ppid=3660 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:36.688488 containerd[2003]: time="2026-01-21T23:37:36.688345185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:36.690535 systemd-networkd[1597]: cali86f5d6cbaa8: Gained IPv6LL Jan 21 23:37:36.691541 containerd[2003]: time="2026-01-21T23:37:36.690802576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:37:36.691541 containerd[2003]: time="2026-01-21T23:37:36.690918701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:36.693006 kubelet[3515]: E0121 23:37:36.692879 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:36.693006 kubelet[3515]: E0121 23:37:36.692949 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:36.693222 kubelet[3515]: E0121 23:37:36.693158 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d8fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84667c98fc-g2rqz_calico-apiserver(de79900c-2c4b-48e5-9995-14ed014509c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:36.694760 kubelet[3515]: E0121 23:37:36.694683 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" podUID="de79900c-2c4b-48e5-9995-14ed014509c5" Jan 21 23:37:36.694000 audit[5362]: NETFILTER_CFG table=nat:137 family=2 entries=35 op=nft_register_chain pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:36.694000 audit[5362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffebb11770 a2=0 a3=1 items=0 ppid=3660 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:36.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:37.099110 containerd[2003]: time="2026-01-21T23:37:37.099036034Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-84667c98fc-fz867,Uid:20617c9b-94b4-4cb3-a6b6-dc13407eb549,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:37:37.099648 containerd[2003]: time="2026-01-21T23:37:37.099461377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spv6f,Uid:9c98ebba-3094-4d44-b58e-8378134e1be8,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:37.099707 containerd[2003]: time="2026-01-21T23:37:37.099675230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c7587b9f-qxd5v,Uid:ca6e1be7-3778-4e58-b701-59e16c774819,Namespace:calico-system,Attempt:0,}" Jan 21 23:37:37.529223 systemd-networkd[1597]: calia4a7e5f4012: Link UP Jan 21 23:37:37.531822 systemd-networkd[1597]: calia4a7e5f4012: Gained carrier Jan 21 23:37:37.563647 kubelet[3515]: E0121 23:37:37.563554 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" podUID="de79900c-2c4b-48e5-9995-14ed014509c5" Jan 21 23:37:37.587128 systemd-networkd[1597]: cali7824622c8ff: Gained IPv6LL Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.268 [INFO][5375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0 calico-kube-controllers-8c7587b9f- calico-system ca6e1be7-3778-4e58-b701-59e16c774819 884 0 2026-01-21 23:37:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8c7587b9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-34 calico-kube-controllers-8c7587b9f-qxd5v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia4a7e5f4012 [] [] }} ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Namespace="calico-system" Pod="calico-kube-controllers-8c7587b9f-qxd5v" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.270 [INFO][5375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Namespace="calico-system" Pod="calico-kube-controllers-8c7587b9f-qxd5v" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.388 [INFO][5399] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" HandleID="k8s-pod-network.4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Workload="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.388 [INFO][5399] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" 
HandleID="k8s-pod-network.4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Workload="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003640c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-34", "pod":"calico-kube-controllers-8c7587b9f-qxd5v", "timestamp":"2026-01-21 23:37:37.388563914 +0000 UTC"}, Hostname:"ip-172-31-29-34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.389 [INFO][5399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.389 [INFO][5399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.390 [INFO][5399] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-34' Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.440 [INFO][5399] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" host="ip-172-31-29-34" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.460 [INFO][5399] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-34" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.473 [INFO][5399] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.476 [INFO][5399] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.481 [INFO][5399] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.481 [INFO][5399] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" host="ip-172-31-29-34" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.486 [INFO][5399] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054 Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.495 [INFO][5399] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" host="ip-172-31-29-34" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.507 [INFO][5399] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.134/26] block=192.168.73.128/26 handle="k8s-pod-network.4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" host="ip-172-31-29-34" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.508 [INFO][5399] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.134/26] handle="k8s-pod-network.4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" host="ip-172-31-29-34" Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.508 [INFO][5399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 23:37:37.608146 containerd[2003]: 2026-01-21 23:37:37.508 [INFO][5399] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.134/26] IPv6=[] ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" HandleID="k8s-pod-network.4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Workload="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" Jan 21 23:37:37.609341 containerd[2003]: 2026-01-21 23:37:37.515 [INFO][5375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Namespace="calico-system" Pod="calico-kube-controllers-8c7587b9f-qxd5v" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0", GenerateName:"calico-kube-controllers-8c7587b9f-", Namespace:"calico-system", SelfLink:"", UID:"ca6e1be7-3778-4e58-b701-59e16c774819", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 37, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c7587b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"", Pod:"calico-kube-controllers-8c7587b9f-qxd5v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4a7e5f4012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:37.609341 containerd[2003]: 2026-01-21 23:37:37.516 [INFO][5375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.134/32] ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Namespace="calico-system" Pod="calico-kube-controllers-8c7587b9f-qxd5v" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" Jan 21 23:37:37.609341 containerd[2003]: 2026-01-21 23:37:37.516 [INFO][5375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4a7e5f4012 ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Namespace="calico-system" Pod="calico-kube-controllers-8c7587b9f-qxd5v" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" Jan 21 23:37:37.609341 containerd[2003]: 2026-01-21 23:37:37.537 [INFO][5375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Namespace="calico-system" Pod="calico-kube-controllers-8c7587b9f-qxd5v" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" Jan 21 23:37:37.609341 containerd[2003]: 2026-01-21 
23:37:37.541 [INFO][5375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Namespace="calico-system" Pod="calico-kube-controllers-8c7587b9f-qxd5v" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0", GenerateName:"calico-kube-controllers-8c7587b9f-", Namespace:"calico-system", SelfLink:"", UID:"ca6e1be7-3778-4e58-b701-59e16c774819", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 37, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c7587b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054", Pod:"calico-kube-controllers-8c7587b9f-qxd5v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4a7e5f4012", MAC:"ca:39:9e:c5:31:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:37.609341 containerd[2003]: 2026-01-21 23:37:37.600 [INFO][5375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" Namespace="calico-system" Pod="calico-kube-controllers-8c7587b9f-qxd5v" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--kube--controllers--8c7587b9f--qxd5v-eth0" Jan 21 23:37:37.682614 containerd[2003]: time="2026-01-21T23:37:37.682557461Z" level=info msg="connecting to shim 4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054" address="unix:///run/containerd/s/25554cadce8e85ed3416e39edbbaa5447955a26089698acac8897d31b217c672" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:37.788407 systemd[1]: Started cri-containerd-4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054.scope - libcontainer container 4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054. 
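The ErrImagePull / ImagePullBackOff events recorded earlier (ghcr.io returning 404 Not Found for ghcr.io/flatcar/calico/apiserver:v3.30.4 and ghcr.io/flatcar/calico/goldmane:v3.30.4) can be reproduced outside the kubelet by asking the registry's OCI distribution API whether the tag resolves. The Go sketch below is illustrative only; it assumes the repository permits anonymous pulls and that ghcr.io issues anonymous tokens from its /token endpoint, which this log does not confirm:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Repository and tag taken from the failed pulls in the log; the token
	// endpoint and anonymous-pull behaviour are assumptions, not log facts.
	repo, tag := "flatcar/calico/apiserver", "v3.30.4"

	// Request an anonymous pull token for the repository.
	resp, err := http.Get(fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest: 200 means the tag resolves; 404 matches the
	// "failed to resolve image ... not found" errors in the log.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Printf("ghcr.io/%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}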
Jan 21 23:37:37.877151 systemd-networkd[1597]: cali76c8e6cd4cd: Link UP Jan 21 23:37:37.877542 systemd-networkd[1597]: cali76c8e6cd4cd: Gained carrier Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.333 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0 csi-node-driver- calico-system 9c98ebba-3094-4d44-b58e-8378134e1be8 802 0 2026-01-21 23:37:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-34 csi-node-driver-spv6f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali76c8e6cd4cd [] [] }} ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Namespace="calico-system" Pod="csi-node-driver-spv6f" WorkloadEndpoint="ip--172--31--29--34-k8s-csi--node--driver--spv6f-" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.334 [INFO][5366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Namespace="calico-system" Pod="csi-node-driver-spv6f" WorkloadEndpoint="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.447 [INFO][5407] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" HandleID="k8s-pod-network.a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Workload="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.448 [INFO][5407] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" HandleID="k8s-pod-network.a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Workload="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036d050), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-34", "pod":"csi-node-driver-spv6f", "timestamp":"2026-01-21 23:37:37.447322053 +0000 UTC"}, Hostname:"ip-172-31-29-34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.449 [INFO][5407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.508 [INFO][5407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.508 [INFO][5407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-34' Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.545 [INFO][5407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" host="ip-172-31-29-34" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.576 [INFO][5407] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-34" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.642 [INFO][5407] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.677 [INFO][5407] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.714 [INFO][5407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.714 [INFO][5407] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" host="ip-172-31-29-34" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.727 [INFO][5407] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.774 [INFO][5407] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" host="ip-172-31-29-34" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.833 [INFO][5407] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.135/26] block=192.168.73.128/26 handle="k8s-pod-network.a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" host="ip-172-31-29-34" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.833 [INFO][5407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.135/26] handle="k8s-pod-network.a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" host="ip-172-31-29-34" Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.833 [INFO][5407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 23:37:37.922013 containerd[2003]: 2026-01-21 23:37:37.834 [INFO][5407] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.135/26] IPv6=[] ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" HandleID="k8s-pod-network.a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Workload="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" Jan 21 23:37:37.925761 containerd[2003]: 2026-01-21 23:37:37.845 [INFO][5366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Namespace="calico-system" Pod="csi-node-driver-spv6f" WorkloadEndpoint="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c98ebba-3094-4d44-b58e-8378134e1be8", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 37, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"", Pod:"csi-node-driver-spv6f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76c8e6cd4cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:37.925761 containerd[2003]: 2026-01-21 23:37:37.845 [INFO][5366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.135/32] ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Namespace="calico-system" Pod="csi-node-driver-spv6f" WorkloadEndpoint="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" Jan 21 23:37:37.925761 containerd[2003]: 2026-01-21 23:37:37.845 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76c8e6cd4cd ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Namespace="calico-system" Pod="csi-node-driver-spv6f" WorkloadEndpoint="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" Jan 21 23:37:37.925761 containerd[2003]: 2026-01-21 23:37:37.876 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Namespace="calico-system" Pod="csi-node-driver-spv6f" WorkloadEndpoint="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" Jan 21 23:37:37.925761 containerd[2003]: 2026-01-21 23:37:37.882 [INFO][5366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" 
Namespace="calico-system" Pod="csi-node-driver-spv6f" WorkloadEndpoint="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c98ebba-3094-4d44-b58e-8378134e1be8", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 37, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab", Pod:"csi-node-driver-spv6f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76c8e6cd4cd", MAC:"0a:60:4d:cf:30:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:37.925761 containerd[2003]: 2026-01-21 23:37:37.915 [INFO][5366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" Namespace="calico-system" Pod="csi-node-driver-spv6f" WorkloadEndpoint="ip--172--31--29--34-k8s-csi--node--driver--spv6f-eth0" Jan 21 23:37:38.014943 containerd[2003]: time="2026-01-21T23:37:38.014888293Z" level=info msg="connecting to shim a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab" address="unix:///run/containerd/s/bedce3ebde210387520f738f2633dd77c2b411a528c1acd32f3c37342e434243" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:38.116202 systemd[1]: Started cri-containerd-a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab.scope - libcontainer container a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab. 
Jan 21 23:37:38.117694 containerd[2003]: time="2026-01-21T23:37:38.117330019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5468c6d76d-ffj6z,Uid:920c851c-0448-41e7-8aac-ea1379198aa5,Namespace:calico-apiserver,Attempt:0,}" Jan 21 23:37:38.124297 systemd-networkd[1597]: calia6668c10e78: Link UP Jan 21 23:37:38.121000 audit[5511]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:38.121000 audit[5511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffc4d9730 a2=0 a3=1 items=0 ppid=3660 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.121000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:38.128745 systemd-networkd[1597]: calia6668c10e78: Gained carrier Jan 21 23:37:38.133000 audit[5511]: NETFILTER_CFG table=nat:139 family=2 entries=44 op=nft_register_rule pid=5511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:38.133000 audit[5511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffffc4d9730 a2=0 a3=1 items=0 ppid=3660 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.342 [INFO][5363] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0 calico-apiserver-84667c98fc- calico-apiserver 20617c9b-94b4-4cb3-a6b6-dc13407eb549 894 0 2026-01-21 23:36:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84667c98fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-34 calico-apiserver-84667c98fc-fz867 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia6668c10e78 [] [] }} ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-fz867" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.343 [INFO][5363] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-fz867" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.482 [INFO][5412] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" HandleID="k8s-pod-network.f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Workload="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" 
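The audit NETFILTER_CFG records above carry the triggering command line as a hex-encoded, NUL-separated PROCTITLE field. A small standard-library Python helper that decodes such a value; the proctitle logged at 23:37:38.121 decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", consistent with the truncated comm="iptables-restor":

    # Decode an audit PROCTITLE value: hex bytes with NUL-separated argv entries.
    def decode_proctitle(hex_string: str) -> str:
        raw = bytes.fromhex(hex_string)
        return " ".join(part.decode("utf-8", errors="replace")
                        for part in raw.split(b"\x00") if part)

    # proctitle from the NETFILTER_CFG record at 23:37:38.121 above:
    sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))  # iptables-restore -w 5 -W 100000 --noflush --counters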
Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.483 [INFO][5412] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" HandleID="k8s-pod-network.f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Workload="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003301d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-34", "pod":"calico-apiserver-84667c98fc-fz867", "timestamp":"2026-01-21 23:37:37.482936509 +0000 UTC"}, Hostname:"ip-172-31-29-34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.483 [INFO][5412] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.833 [INFO][5412] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.834 [INFO][5412] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-34' Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.896 [INFO][5412] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" host="ip-172-31-29-34" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.925 [INFO][5412] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-34" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.968 [INFO][5412] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:37.987 [INFO][5412] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:38.002 [INFO][5412] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:38.002 [INFO][5412] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" host="ip-172-31-29-34" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:38.018 [INFO][5412] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:38.035 [INFO][5412] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" host="ip-172-31-29-34" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:38.069 [INFO][5412] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.136/26] block=192.168.73.128/26 handle="k8s-pod-network.f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" host="ip-172-31-29-34" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:38.071 [INFO][5412] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.136/26] handle="k8s-pod-network.f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" host="ip-172-31-29-34" Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 
23:37:38.072 [INFO][5412] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 23:37:38.199186 containerd[2003]: 2026-01-21 23:37:38.074 [INFO][5412] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.136/26] IPv6=[] ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" HandleID="k8s-pod-network.f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Workload="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" Jan 21 23:37:38.204273 containerd[2003]: 2026-01-21 23:37:38.093 [INFO][5363] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-fz867" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0", GenerateName:"calico-apiserver-84667c98fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"20617c9b-94b4-4cb3-a6b6-dc13407eb549", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84667c98fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"", Pod:"calico-apiserver-84667c98fc-fz867", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6668c10e78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:38.204273 containerd[2003]: 2026-01-21 23:37:38.093 [INFO][5363] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.136/32] ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-fz867" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" Jan 21 23:37:38.204273 containerd[2003]: 2026-01-21 23:37:38.093 [INFO][5363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6668c10e78 ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-fz867" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" Jan 21 23:37:38.204273 containerd[2003]: 2026-01-21 23:37:38.131 [INFO][5363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-fz867" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" Jan 21 23:37:38.204273 containerd[2003]: 
2026-01-21 23:37:38.137 [INFO][5363] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-fz867" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0", GenerateName:"calico-apiserver-84667c98fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"20617c9b-94b4-4cb3-a6b6-dc13407eb549", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84667c98fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef", Pod:"calico-apiserver-84667c98fc-fz867", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6668c10e78", MAC:"46:c9:18:5e:b9:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:38.204273 containerd[2003]: 2026-01-21 23:37:38.186 [INFO][5363] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" Namespace="calico-apiserver" Pod="calico-apiserver-84667c98fc-fz867" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--84667c98fc--fz867-eth0" Jan 21 23:37:38.250000 audit[5534]: NETFILTER_CFG table=filter:140 family=2 entries=82 op=nft_register_chain pid=5534 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:38.250000 audit[5534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=42652 a0=3 a1=ffffd6560cb0 a2=0 a3=ffff8f731fa8 items=0 ppid=4703 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.250000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:38.265000 audit: BPF prog-id=248 op=LOAD Jan 21 23:37:38.266000 audit: BPF prog-id=249 op=LOAD Jan 21 23:37:38.266000 audit[5451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5438 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.266000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436383663623663663863326565356130323364363365326239616138 Jan 21 23:37:38.267000 audit: BPF prog-id=249 op=UNLOAD Jan 21 23:37:38.267000 audit[5451]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5438 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436383663623663663863326565356130323364363365326239616138 Jan 21 23:37:38.267000 audit: BPF prog-id=250 op=LOAD Jan 21 23:37:38.267000 audit[5451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5438 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436383663623663663863326565356130323364363365326239616138 Jan 21 23:37:38.268000 audit: BPF prog-id=251 op=LOAD Jan 21 23:37:38.268000 audit[5451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5438 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436383663623663663863326565356130323364363365326239616138 Jan 21 23:37:38.269000 audit: BPF prog-id=251 op=UNLOAD Jan 21 23:37:38.269000 audit[5451]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5438 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436383663623663663863326565356130323364363365326239616138 Jan 21 23:37:38.269000 audit: BPF prog-id=250 op=UNLOAD Jan 21 23:37:38.269000 audit[5451]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5438 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.269000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436383663623663663863326565356130323364363365326239616138 Jan 21 23:37:38.271000 audit: BPF prog-id=252 op=LOAD Jan 21 23:37:38.271000 audit[5451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5438 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.271000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436383663623663663863326565356130323364363365326239616138 Jan 21 23:37:38.283000 audit: BPF prog-id=253 op=LOAD Jan 21 23:37:38.285000 audit: BPF prog-id=254 op=LOAD Jan 21 23:37:38.285000 audit[5493]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5482 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135356365303439393033313463663033656565393438326634343835 Jan 21 23:37:38.286000 audit: BPF prog-id=254 op=UNLOAD Jan 21 23:37:38.286000 audit[5493]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5482 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135356365303439393033313463663033656565393438326634343835 Jan 21 23:37:38.288000 audit: BPF prog-id=255 op=LOAD Jan 21 23:37:38.288000 audit[5493]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5482 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135356365303439393033313463663033656565393438326634343835 Jan 21 23:37:38.288000 audit: BPF prog-id=256 op=LOAD Jan 21 23:37:38.288000 audit[5493]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5482 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.288000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135356365303439393033313463663033656565393438326634343835 Jan 21 23:37:38.290000 audit: BPF prog-id=256 op=UNLOAD Jan 21 23:37:38.290000 audit[5493]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5482 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135356365303439393033313463663033656565393438326634343835 Jan 21 23:37:38.291000 audit: BPF prog-id=255 op=UNLOAD Jan 21 23:37:38.291000 audit[5493]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5482 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135356365303439393033313463663033656565393438326634343835 Jan 21 23:37:38.292000 audit: BPF prog-id=257 op=LOAD Jan 21 23:37:38.292000 audit[5493]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5482 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135356365303439393033313463663033656565393438326634343835 Jan 21 23:37:38.300025 containerd[2003]: time="2026-01-21T23:37:38.299884709Z" level=info msg="connecting to shim f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef" address="unix:///run/containerd/s/03bf9a61a293294f071f75be67a70817eeda2c61bd105ab9ccdb7d7c3f12f8e5" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:38.406353 systemd[1]: Started cri-containerd-f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef.scope - libcontainer container f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef. 
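The audit SYSCALL records interleaved above come from runc preparing the two containers. On this aarch64 host (arch=c00000b7 is AUDIT_ARCH_AARCH64), syscall 280 is bpf() and syscall 57 is close(), so each "BPF prog-id ... op=LOAD" / "op=UNLOAD" pair is runc loading a program and later closing its file descriptor (the a0 of the close matches the fd returned in the earlier bpf exit). A lookup sketch under the assumption that the generic arm64 syscall table applies; other architectures use different numbers:

    # Map the syscall numbers appearing in the audit records above to names,
    # assuming the generic arm64 (AUDIT_ARCH_AARCH64 = 0xC00000B7) syscall table.
    AARCH64_SYSCALLS = {
        57: "close",    # runc closing a BPF program fd -> "BPF prog-id ... op=UNLOAD"
        211: "sendmsg", # netlink message carrying the nft_register_* batch
        280: "bpf",     # runc loading a program -> "BPF prog-id ... op=LOAD"
    }

    def describe(arch: str, nr: int) -> str:
        if arch.lower() != "c00000b7":
            return f"arch {arch}: not covered by this sketch"
        return AARCH64_SYSCALLS.get(nr, f"syscall {nr}: not in this sketch")

    for nr in (280, 57, 211):
        print(nr, "->", describe("c00000b7", nr))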
Jan 21 23:37:38.440528 containerd[2003]: time="2026-01-21T23:37:38.440445022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spv6f,Uid:9c98ebba-3094-4d44-b58e-8378134e1be8,Namespace:calico-system,Attempt:0,} returns sandbox id \"a55ce04990314cf03eee9482f44853647a3890352738b5d43c24743591a02cab\"" Jan 21 23:37:38.457902 containerd[2003]: time="2026-01-21T23:37:38.457401096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 23:37:38.474000 audit[5593]: NETFILTER_CFG table=filter:141 family=2 entries=59 op=nft_register_chain pid=5593 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:38.474000 audit[5593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29460 a0=3 a1=ffffdc427050 a2=0 a3=ffffb24f2fa8 items=0 ppid=4703 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.474000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:38.487000 audit: BPF prog-id=258 op=LOAD Jan 21 23:37:38.493000 audit: BPF prog-id=259 op=LOAD Jan 21 23:37:38.493000 audit[5561]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=5547 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634653764383666373161386163633330393864383738313839623862 Jan 21 23:37:38.493000 audit: BPF prog-id=259 op=UNLOAD Jan 21 23:37:38.493000 audit[5561]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5547 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634653764383666373161386163633330393864383738313839623862 Jan 21 23:37:38.494000 audit: BPF prog-id=260 op=LOAD Jan 21 23:37:38.494000 audit[5561]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=5547 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634653764383666373161386163633330393864383738313839623862 Jan 21 23:37:38.494000 audit: BPF prog-id=261 op=LOAD Jan 21 23:37:38.494000 audit[5561]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=5547 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634653764383666373161386163633330393864383738313839623862 Jan 21 23:37:38.495000 audit: BPF prog-id=261 op=UNLOAD Jan 21 23:37:38.495000 audit[5561]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5547 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634653764383666373161386163633330393864383738313839623862 Jan 21 23:37:38.495000 audit: BPF prog-id=260 op=UNLOAD Jan 21 23:37:38.495000 audit[5561]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5547 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634653764383666373161386163633330393864383738313839623862 Jan 21 23:37:38.495000 audit: BPF prog-id=262 op=LOAD Jan 21 23:37:38.495000 audit[5561]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=5547 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634653764383666373161386163633330393864383738313839623862 Jan 21 23:37:38.525218 containerd[2003]: time="2026-01-21T23:37:38.525166236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c7587b9f-qxd5v,Uid:ca6e1be7-3778-4e58-b701-59e16c774819,Namespace:calico-system,Attempt:0,} returns sandbox id \"4686cb6cf8c2ee5a023d63e2b9aa892bb6bd9e4f478aa1c404e85a8bff3e3054\"" Jan 21 23:37:38.606955 containerd[2003]: time="2026-01-21T23:37:38.606485197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84667c98fc-fz867,Uid:20617c9b-94b4-4cb3-a6b6-dc13407eb549,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f4e7d86f71a8acc3098d878189b8bfc8f316ccf4a4eace92bce071ac1c0822ef\"" Jan 21 23:37:38.665501 systemd-networkd[1597]: cali6aebd44ffd7: Link UP Jan 21 23:37:38.677470 systemd-networkd[1597]: cali6aebd44ffd7: Gained carrier Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.331 [INFO][5514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0 
calico-apiserver-5468c6d76d- calico-apiserver 920c851c-0448-41e7-8aac-ea1379198aa5 891 0 2026-01-21 23:36:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5468c6d76d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-34 calico-apiserver-5468c6d76d-ffj6z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6aebd44ffd7 [] [] }} ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Namespace="calico-apiserver" Pod="calico-apiserver-5468c6d76d-ffj6z" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.333 [INFO][5514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Namespace="calico-apiserver" Pod="calico-apiserver-5468c6d76d-ffj6z" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.533 [INFO][5571] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" HandleID="k8s-pod-network.4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Workload="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.533 [INFO][5571] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" HandleID="k8s-pod-network.4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Workload="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-34", "pod":"calico-apiserver-5468c6d76d-ffj6z", "timestamp":"2026-01-21 23:37:38.533481976 +0000 UTC"}, Hostname:"ip-172-31-29-34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.534 [INFO][5571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.534 [INFO][5571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
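Every assignment above funnels through Calico's host-wide IPAM lock, so concurrent CNI ADDs on the same node serialize (the earlier request for the 84667c98fc pod waited from 23:37:37.483 to 23:37:37.833 before acquiring it). A parsing sketch that pairs the "About to acquire" / "Acquired" events per plugin pid and reports the wait; the regex assumes the bracketed-timestamp layout visible in these containerd-wrapped lines:

    # Measure how long each CNI call waited for Calico's host-wide IPAM lock,
    # by pairing "About to acquire" / "Acquired" events per [pid]. Parsing sketch
    # only; it assumes the "YYYY-MM-DD HH:MM:SS.mmm [INFO][pid] ipam/..." layout above.
    import re
    from datetime import datetime

    EVENT = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) \[INFO\]\[(\d+)\] ipam/ipam_plugin\.go \d+: "
        r"(About to acquire|Acquired|Released) host-wide IPAM lock"
    )

    def lock_waits(lines):
        pending = {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
            pid, event = m.group(2), m.group(3)
            if event == "About to acquire":
                pending[pid] = ts
            elif event == "Acquired" and pid in pending:
                yield pid, (ts - pending.pop(pid)).total_seconds()

    sample = [
        "2026-01-21 23:37:37.483 [INFO][5412] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.",
        "2026-01-21 23:37:37.833 [INFO][5412] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.",
    ]
    print(list(lock_waits(sample)))  # [('5412', 0.35)] -> roughly 350 ms spent waiting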
Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.534 [INFO][5571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-34' Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.550 [INFO][5571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" host="ip-172-31-29-34" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.566 [INFO][5571] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-34" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.587 [INFO][5571] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.592 [INFO][5571] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.599 [INFO][5571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ip-172-31-29-34" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.599 [INFO][5571] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" host="ip-172-31-29-34" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.607 [INFO][5571] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.618 [INFO][5571] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" host="ip-172-31-29-34" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.640 [INFO][5571] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.137/26] block=192.168.73.128/26 handle="k8s-pod-network.4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" host="ip-172-31-29-34" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.641 [INFO][5571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.137/26] handle="k8s-pod-network.4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" host="ip-172-31-29-34" Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.641 [INFO][5571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
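The .137 address claimed above comes from the same 192.168.73.128/26 block that already served .135 and .136, which is expected while the node's affine block still has free addresses. A short capacity sketch with the standard ipaddress module; it counts only the three assignments visible in this section (earlier pods on ip-172-31-29-34 hold other addresses from the block) and assumes every address in the /26 is assignable:

    # Capacity check for the node's affine IPAM block, counting only the
    # assignments visible in this section of the log (.135, .136, .137).
    # Assumption: all 64 addresses in the /26 are treated as assignable.
    import ipaddress

    block = ipaddress.ip_network("192.168.73.128/26")
    assigned = [ipaddress.ip_address(a) for a in
                ("192.168.73.135", "192.168.73.136", "192.168.73.137")]

    assert all(a in block for a in assigned)
    free = block.num_addresses - len(assigned)
    print(f"{len(assigned)} assigned here, {free} of {block.num_addresses} remaining in {block}")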
Jan 21 23:37:38.706449 containerd[2003]: 2026-01-21 23:37:38.641 [INFO][5571] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.137/26] IPv6=[] ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" HandleID="k8s-pod-network.4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Workload="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" Jan 21 23:37:38.707670 containerd[2003]: 2026-01-21 23:37:38.646 [INFO][5514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Namespace="calico-apiserver" Pod="calico-apiserver-5468c6d76d-ffj6z" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0", GenerateName:"calico-apiserver-5468c6d76d-", Namespace:"calico-apiserver", SelfLink:"", UID:"920c851c-0448-41e7-8aac-ea1379198aa5", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5468c6d76d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"", Pod:"calico-apiserver-5468c6d76d-ffj6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6aebd44ffd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:38.707670 containerd[2003]: 2026-01-21 23:37:38.646 [INFO][5514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.137/32] ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Namespace="calico-apiserver" Pod="calico-apiserver-5468c6d76d-ffj6z" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" Jan 21 23:37:38.707670 containerd[2003]: 2026-01-21 23:37:38.646 [INFO][5514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6aebd44ffd7 ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Namespace="calico-apiserver" Pod="calico-apiserver-5468c6d76d-ffj6z" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" Jan 21 23:37:38.707670 containerd[2003]: 2026-01-21 23:37:38.678 [INFO][5514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Namespace="calico-apiserver" Pod="calico-apiserver-5468c6d76d-ffj6z" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" Jan 21 23:37:38.707670 containerd[2003]: 2026-01-21 23:37:38.679 [INFO][5514] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Namespace="calico-apiserver" Pod="calico-apiserver-5468c6d76d-ffj6z" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0", GenerateName:"calico-apiserver-5468c6d76d-", Namespace:"calico-apiserver", SelfLink:"", UID:"920c851c-0448-41e7-8aac-ea1379198aa5", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 23, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5468c6d76d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-34", ContainerID:"4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a", Pod:"calico-apiserver-5468c6d76d-ffj6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6aebd44ffd7", MAC:"c2:82:f9:bf:58:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 23:37:38.707670 containerd[2003]: 2026-01-21 23:37:38.697 [INFO][5514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" Namespace="calico-apiserver" Pod="calico-apiserver-5468c6d76d-ffj6z" WorkloadEndpoint="ip--172--31--29--34-k8s-calico--apiserver--5468c6d76d--ffj6z-eth0" Jan 21 23:37:38.752000 audit[5618]: NETFILTER_CFG table=filter:142 family=2 entries=53 op=nft_register_chain pid=5618 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 23:37:38.752000 audit[5618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26592 a0=3 a1=ffffe5853f70 a2=0 a3=ffff9b5c8fa8 items=0 ppid=4703 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.752000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 23:37:38.764231 containerd[2003]: time="2026-01-21T23:37:38.764156390Z" level=info msg="connecting to shim 4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a" address="unix:///run/containerd/s/23bbfd27a020e3371e28ad069891ca04d8204a0d64b7145ef65056ba67dee2eb" namespace=k8s.io protocol=ttrpc version=3 Jan 21 23:37:38.780677 containerd[2003]: time="2026-01-21T23:37:38.780624440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:38.788317 containerd[2003]: time="2026-01-21T23:37:38.788197524Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 23:37:38.793081 containerd[2003]: time="2026-01-21T23:37:38.788623323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:38.793081 containerd[2003]: time="2026-01-21T23:37:38.792056941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 23:37:38.793718 kubelet[3515]: E0121 23:37:38.788894 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:37:38.793718 kubelet[3515]: E0121 23:37:38.788955 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:37:38.793718 kubelet[3515]: E0121 23:37:38.789318 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxswv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:38.836116 systemd[1]: Started 
cri-containerd-4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a.scope - libcontainer container 4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a. Jan 21 23:37:38.887000 audit: BPF prog-id=263 op=LOAD Jan 21 23:37:38.888000 audit: BPF prog-id=264 op=LOAD Jan 21 23:37:38.888000 audit[5637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5626 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343537653339346439356435633065643534646636613366306539 Jan 21 23:37:38.888000 audit: BPF prog-id=264 op=UNLOAD Jan 21 23:37:38.888000 audit[5637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5626 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343537653339346439356435633065643534646636613366306539 Jan 21 23:37:38.888000 audit: BPF prog-id=265 op=LOAD Jan 21 23:37:38.888000 audit[5637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5626 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343537653339346439356435633065643534646636613366306539 Jan 21 23:37:38.889000 audit: BPF prog-id=266 op=LOAD Jan 21 23:37:38.889000 audit[5637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5626 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343537653339346439356435633065643534646636613366306539 Jan 21 23:37:38.889000 audit: BPF prog-id=266 op=UNLOAD Jan 21 23:37:38.889000 audit[5637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5626 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.889000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343537653339346439356435633065643534646636613366306539 Jan 21 23:37:38.889000 audit: BPF prog-id=265 op=UNLOAD Jan 21 23:37:38.889000 audit[5637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5626 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343537653339346439356435633065643534646636613366306539 Jan 21 23:37:38.889000 audit: BPF prog-id=267 op=LOAD Jan 21 23:37:38.889000 audit[5637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5626 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:38.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461343537653339346439356435633065643534646636613366306539 Jan 21 23:37:38.972684 containerd[2003]: time="2026-01-21T23:37:38.971839685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5468c6d76d-ffj6z,Uid:920c851c-0448-41e7-8aac-ea1379198aa5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4a457e394d95d5c0ed54df6a3f0e9e750ef2870111a364c75742364bc584300a\"" Jan 21 23:37:39.090184 containerd[2003]: time="2026-01-21T23:37:39.089913155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:39.092256 containerd[2003]: time="2026-01-21T23:37:39.092189904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 23:37:39.092600 containerd[2003]: time="2026-01-21T23:37:39.092391739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:39.094485 kubelet[3515]: E0121 23:37:39.093291 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:37:39.094485 kubelet[3515]: E0121 23:37:39.093358 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:37:39.094825 containerd[2003]: time="2026-01-21T23:37:39.094739409Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:37:39.096207 kubelet[3515]: E0121 23:37:39.095170 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bkmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8c7587b9f-qxd5v_calico-system(ca6e1be7-3778-4e58-b701-59e16c774819): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:39.097992 kubelet[3515]: E0121 23:37:39.097889 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:37:39.213000 audit[5670]: NETFILTER_CFG 
table=filter:143 family=2 entries=14 op=nft_register_rule pid=5670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:39.213000 audit[5670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdd7023b0 a2=0 a3=1 items=0 ppid=3660 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:39.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:39.227000 audit[5670]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=5670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:39.227000 audit[5670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffdd7023b0 a2=0 a3=1 items=0 ppid=3660 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:39.227000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:39.315158 systemd-networkd[1597]: cali76c8e6cd4cd: Gained IPv6LL Jan 21 23:37:39.364660 containerd[2003]: time="2026-01-21T23:37:39.364595078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:39.367743 containerd[2003]: time="2026-01-21T23:37:39.367448919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:37:39.367743 containerd[2003]: time="2026-01-21T23:37:39.367526928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:39.368145 kubelet[3515]: E0121 23:37:39.367754 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:39.368145 kubelet[3515]: E0121 23:37:39.367819 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:39.368426 kubelet[3515]: E0121 23:37:39.368220 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrr6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84667c98fc-fz867_calico-apiserver(20617c9b-94b4-4cb3-a6b6-dc13407eb549): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:39.369895 kubelet[3515]: E0121 23:37:39.369669 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:37:39.370170 containerd[2003]: time="2026-01-21T23:37:39.369193350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 23:37:39.444094 systemd-networkd[1597]: calia4a7e5f4012: Gained IPv6LL Jan 21 23:37:39.506180 systemd-networkd[1597]: calia6668c10e78: Gained IPv6LL Jan 21 23:37:39.591463 kubelet[3515]: E0121 23:37:39.591389 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" 
podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:37:39.599304 kubelet[3515]: E0121 23:37:39.599195 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:37:39.652821 containerd[2003]: time="2026-01-21T23:37:39.652753100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:39.655144 containerd[2003]: time="2026-01-21T23:37:39.655061477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 23:37:39.655309 containerd[2003]: time="2026-01-21T23:37:39.655203498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:39.656034 kubelet[3515]: E0121 23:37:39.655936 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:37:39.656178 kubelet[3515]: E0121 23:37:39.656038 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:37:39.656409 kubelet[3515]: E0121 23:37:39.656327 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxswv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:39.657375 containerd[2003]: time="2026-01-21T23:37:39.657036338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:37:39.659621 kubelet[3515]: E0121 23:37:39.659475 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:37:39.717000 audit[5673]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:39.717000 audit[5673]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdf947350 a2=0 a3=1 items=0 ppid=3660 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:39.717000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:39.729000 audit[5673]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:39.729000 audit[5673]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdf947350 a2=0 a3=1 items=0 ppid=3660 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:39.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:39.922778 containerd[2003]: time="2026-01-21T23:37:39.922702563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:39.925102 containerd[2003]: time="2026-01-21T23:37:39.924926358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:37:39.925266 containerd[2003]: time="2026-01-21T23:37:39.925045555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:39.925583 kubelet[3515]: E0121 23:37:39.925495 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:39.927469 kubelet[3515]: E0121 23:37:39.925585 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:39.927469 kubelet[3515]: E0121 23:37:39.925787 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8bfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5468c6d76d-ffj6z_calico-apiserver(920c851c-0448-41e7-8aac-ea1379198aa5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:39.927469 kubelet[3515]: E0121 23:37:39.927201 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:37:40.602294 kubelet[3515]: E0121 23:37:40.601835 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:37:40.602815 kubelet[3515]: E0121 23:37:40.602752 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:37:40.603512 kubelet[3515]: E0121 23:37:40.603440 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:37:40.660020 systemd-networkd[1597]: cali6aebd44ffd7: Gained IPv6LL Jan 21 23:37:40.720000 audit[5675]: NETFILTER_CFG table=filter:147 family=2 entries=14 op=nft_register_rule pid=5675 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:40.722840 kernel: kauditd_printk_skb: 267 callbacks suppressed Jan 21 23:37:40.723342 kernel: audit: type=1325 audit(1769038660.720:772): table=filter:147 family=2 entries=14 op=nft_register_rule pid=5675 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:40.720000 audit[5675]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc6313740 a2=0 a3=1 items=0 ppid=3660 pid=5675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:40.733997 kernel: audit: type=1300 audit(1769038660.720:772): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc6313740 a2=0 a3=1 items=0 ppid=3660 pid=5675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:40.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:40.737851 kernel: audit: type=1327 audit(1769038660.720:772): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:40.739000 audit[5675]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5675 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:40.739000 audit[5675]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc6313740 a2=0 a3=1 items=0 ppid=3660 pid=5675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:40.750612 kernel: audit: type=1325 audit(1769038660.739:773): table=nat:148 family=2 entries=20 op=nft_register_rule pid=5675 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:37:40.750890 kernel: audit: type=1300 audit(1769038660.739:773): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc6313740 a2=0 a3=1 items=0 ppid=3660 pid=5675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:40.750938 kernel: audit: type=1327 audit(1769038660.739:773): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:40.739000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:37:42.974765 ntpd[1961]: Listen normally on 6 vxlan.calico 192.168.73.128:123 Jan 21 23:37:42.974855 ntpd[1961]: Listen normally on 7 vxlan.calico [fe80::6494:57ff:fe54:5def%4]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 
23:37:42 ntpd[1961]: Listen normally on 6 vxlan.calico 192.168.73.128:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 7 vxlan.calico [fe80::6494:57ff:fe54:5def%4]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 8 cali81664905efe [fe80::ecee:eeff:feee:eeee%7]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 9 cali2473b56ca62 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 10 cali0dc6d58d960 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 11 cali86f5d6cbaa8 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 12 cali7824622c8ff [fe80::ecee:eeff:feee:eeee%11]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 13 calia4a7e5f4012 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 14 cali76c8e6cd4cd [fe80::ecee:eeff:feee:eeee%13]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 15 calia6668c10e78 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 21 23:37:42.975431 ntpd[1961]: 21 Jan 23:37:42 ntpd[1961]: Listen normally on 16 cali6aebd44ffd7 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 21 23:37:42.974905 ntpd[1961]: Listen normally on 8 cali81664905efe [fe80::ecee:eeff:feee:eeee%7]:123 Jan 21 23:37:42.974952 ntpd[1961]: Listen normally on 9 cali2473b56ca62 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 21 23:37:42.975054 ntpd[1961]: Listen normally on 10 cali0dc6d58d960 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 21 23:37:42.975105 ntpd[1961]: Listen normally on 11 cali86f5d6cbaa8 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 21 23:37:42.975151 ntpd[1961]: Listen normally on 12 cali7824622c8ff [fe80::ecee:eeff:feee:eeee%11]:123 Jan 21 23:37:42.975196 ntpd[1961]: Listen normally on 13 calia4a7e5f4012 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 21 23:37:42.975240 ntpd[1961]: Listen normally on 14 cali76c8e6cd4cd [fe80::ecee:eeff:feee:eeee%13]:123 Jan 21 23:37:42.975287 ntpd[1961]: Listen normally on 15 calia6668c10e78 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 21 23:37:42.975336 ntpd[1961]: Listen normally on 16 cali6aebd44ffd7 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 21 23:37:45.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.34:22-68.220.241.50:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:45.425440 systemd[1]: Started sshd@7-172.31.29.34:22-68.220.241.50:60950.service - OpenSSH per-connection server daemon (68.220.241.50:60950). Jan 21 23:37:45.433061 kernel: audit: type=1130 audit(1769038665.424:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.34:22-68.220.241.50:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:37:45.947000 audit[5689]: USER_ACCT pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:45.949741 sshd[5689]: Accepted publickey for core from 68.220.241.50 port 60950 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:37:45.955401 kernel: audit: type=1101 audit(1769038665.947:775): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:45.954000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:45.958545 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:37:45.965281 kernel: audit: type=1103 audit(1769038665.954:776): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:45.965391 kernel: audit: type=1006 audit(1769038665.954:777): pid=5689 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 21 23:37:45.965436 kernel: audit: type=1300 audit(1769038665.954:777): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc00bf0e0 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:45.954000 audit[5689]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc00bf0e0 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:45.974217 kernel: audit: type=1327 audit(1769038665.954:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:37:45.954000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:37:45.974530 systemd-logind[1971]: New session 8 of user core. Jan 21 23:37:45.980329 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 21 23:37:45.986000 audit[5689]: USER_START pid=5689 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:45.994000 audit[5692]: CRED_ACQ pid=5692 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:46.001004 kernel: audit: type=1105 audit(1769038665.986:778): pid=5689 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:46.001079 kernel: audit: type=1103 audit(1769038665.994:779): pid=5692 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:46.335909 sshd[5692]: Connection closed by 68.220.241.50 port 60950 Jan 21 23:37:46.337332 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Jan 21 23:37:46.339000 audit[5689]: USER_END pid=5689 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:46.345522 systemd[1]: sshd@7-172.31.29.34:22-68.220.241.50:60950.service: Deactivated successfully. Jan 21 23:37:46.339000 audit[5689]: CRED_DISP pid=5689 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:46.351886 systemd[1]: session-8.scope: Deactivated successfully. Jan 21 23:37:46.354465 kernel: audit: type=1106 audit(1769038666.339:780): pid=5689 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:46.354929 kernel: audit: type=1104 audit(1769038666.339:781): pid=5689 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:46.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.34:22-68.220.241.50:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:37:46.362333 kernel: audit: type=1131 audit(1769038666.341:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.34:22-68.220.241.50:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:46.364165 systemd-logind[1971]: Session 8 logged out. Waiting for processes to exit. Jan 21 23:37:46.369384 systemd-logind[1971]: Removed session 8. Jan 21 23:37:48.104028 containerd[2003]: time="2026-01-21T23:37:48.103002647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 23:37:48.354654 containerd[2003]: time="2026-01-21T23:37:48.354291293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:48.357826 containerd[2003]: time="2026-01-21T23:37:48.357684395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 23:37:48.358133 containerd[2003]: time="2026-01-21T23:37:48.357736101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:48.358330 kubelet[3515]: E0121 23:37:48.358245 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:37:48.358884 kubelet[3515]: E0121 23:37:48.358341 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:37:48.358884 kubelet[3515]: E0121 23:37:48.358739 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdxpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jq2tn_calico-system(c9be5f6a-b238-4709-b078-d405c449b532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:48.362005 kubelet[3515]: E0121 23:37:48.361245 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:37:49.101167 containerd[2003]: time="2026-01-21T23:37:49.100232434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 
21 23:37:49.368429 containerd[2003]: time="2026-01-21T23:37:49.368113579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:49.370903 containerd[2003]: time="2026-01-21T23:37:49.370756457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 23:37:49.371217 containerd[2003]: time="2026-01-21T23:37:49.370875989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:49.371297 kubelet[3515]: E0121 23:37:49.371157 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:37:49.371297 kubelet[3515]: E0121 23:37:49.371216 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:37:49.371797 kubelet[3515]: E0121 23:37:49.371369 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:24f3b654eaca42c1be474f4c2fb54f82,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bbxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b9b9d79c-64kd2_calico-system(b82636a0-c786-4a73-90fe-e05d2e69a656): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:49.375000 containerd[2003]: time="2026-01-21T23:37:49.374922117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 23:37:49.674169 containerd[2003]: time="2026-01-21T23:37:49.670627855Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Jan 21 23:37:49.674169 containerd[2003]: time="2026-01-21T23:37:49.673482607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 23:37:49.674169 containerd[2003]: time="2026-01-21T23:37:49.673545779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:49.674387 kubelet[3515]: E0121 23:37:49.673764 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:37:49.674387 kubelet[3515]: E0121 23:37:49.673839 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:37:49.675264 kubelet[3515]: E0121 23:37:49.674067 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b9b9d79c-64kd2_calico-system(b82636a0-c786-4a73-90fe-e05d2e69a656): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:49.677092 kubelet[3515]: E0121 23:37:49.676826 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64b9b9d79c-64kd2" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" Jan 21 23:37:50.101570 containerd[2003]: time="2026-01-21T23:37:50.101480582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 23:37:50.386568 containerd[2003]: time="2026-01-21T23:37:50.386346035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:50.388897 containerd[2003]: time="2026-01-21T23:37:50.388807180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 23:37:50.389087 containerd[2003]: time="2026-01-21T23:37:50.388954622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:50.389383 kubelet[3515]: E0121 23:37:50.389318 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:37:50.392172 kubelet[3515]: E0121 23:37:50.389400 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:37:50.392172 kubelet[3515]: E0121 23:37:50.391808 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bkmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8c7587b9f-qxd5v_calico-system(ca6e1be7-3778-4e58-b701-59e16c774819): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:50.392829 containerd[2003]: time="2026-01-21T23:37:50.389886280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:37:50.393601 kubelet[3515]: E0121 23:37:50.393537 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:37:50.681895 containerd[2003]: 
time="2026-01-21T23:37:50.681071036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:50.683459 containerd[2003]: time="2026-01-21T23:37:50.683305602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:37:50.683459 containerd[2003]: time="2026-01-21T23:37:50.683374363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:50.683819 kubelet[3515]: E0121 23:37:50.683741 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:50.683900 kubelet[3515]: E0121 23:37:50.683837 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:50.685395 kubelet[3515]: E0121 23:37:50.684709 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d8fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-84667c98fc-g2rqz_calico-apiserver(de79900c-2c4b-48e5-9995-14ed014509c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:50.686414 kubelet[3515]: E0121 23:37:50.686300 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" podUID="de79900c-2c4b-48e5-9995-14ed014509c5" Jan 21 23:37:51.100260 containerd[2003]: time="2026-01-21T23:37:51.100106758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:37:51.350524 containerd[2003]: time="2026-01-21T23:37:51.350346456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:51.352952 containerd[2003]: time="2026-01-21T23:37:51.352871493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:37:51.353115 containerd[2003]: time="2026-01-21T23:37:51.353039865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:51.353568 kubelet[3515]: E0121 23:37:51.353326 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:51.353568 kubelet[3515]: E0121 23:37:51.353419 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:51.355159 kubelet[3515]: E0121 23:37:51.354914 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8bfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5468c6d76d-ffj6z_calico-apiserver(920c851c-0448-41e7-8aac-ea1379198aa5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:51.356369 kubelet[3515]: E0121 23:37:51.356285 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:37:51.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.34:22-68.220.241.50:60964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:51.433311 systemd[1]: Started sshd@8-172.31.29.34:22-68.220.241.50:60964.service - OpenSSH per-connection server daemon (68.220.241.50:60964). Jan 21 23:37:51.442024 kernel: audit: type=1130 audit(1769038671.432:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.34:22-68.220.241.50:60964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:37:51.888000 audit[5708]: USER_ACCT pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:51.890321 sshd[5708]: Accepted publickey for core from 68.220.241.50 port 60964 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:37:51.899028 kernel: audit: type=1101 audit(1769038671.888:784): pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:51.899162 kernel: audit: type=1103 audit(1769038671.896:785): pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:51.896000 audit[5708]: CRED_ACQ pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:51.900045 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:37:51.908546 kernel: audit: type=1006 audit(1769038671.896:786): pid=5708 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 21 23:37:51.896000 audit[5708]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffda01430 a2=3 a3=0 items=0 ppid=1 pid=5708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:51.914799 kernel: audit: type=1300 audit(1769038671.896:786): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffda01430 a2=3 a3=0 items=0 ppid=1 pid=5708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:51.896000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:37:51.919328 kernel: audit: type=1327 audit(1769038671.896:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:37:51.921595 systemd-logind[1971]: New session 9 of user core. Jan 21 23:37:51.932389 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 21 23:37:51.938000 audit[5708]: USER_START pid=5708 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:51.947154 kernel: audit: type=1105 audit(1769038671.938:787): pid=5708 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:51.947000 audit[5711]: CRED_ACQ pid=5711 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:51.954035 kernel: audit: type=1103 audit(1769038671.947:788): pid=5711 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:52.291559 sshd[5711]: Connection closed by 68.220.241.50 port 60964 Jan 21 23:37:52.292408 sshd-session[5708]: pam_unix(sshd:session): session closed for user core Jan 21 23:37:52.295000 audit[5708]: USER_END pid=5708 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:52.302924 systemd[1]: sshd@8-172.31.29.34:22-68.220.241.50:60964.service: Deactivated successfully. Jan 21 23:37:52.296000 audit[5708]: CRED_DISP pid=5708 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:52.308952 kernel: audit: type=1106 audit(1769038672.295:789): pid=5708 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:52.309138 kernel: audit: type=1104 audit(1769038672.296:790): pid=5708 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:52.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.34:22-68.220.241.50:60964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:52.316922 systemd[1]: session-9.scope: Deactivated successfully. Jan 21 23:37:52.326522 systemd-logind[1971]: Session 9 logged out. Waiting for processes to exit. Jan 21 23:37:52.332361 systemd-logind[1971]: Removed session 9. 
Jan 21 23:37:53.102417 containerd[2003]: time="2026-01-21T23:37:53.102354791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:37:53.364025 containerd[2003]: time="2026-01-21T23:37:53.363798145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:53.366891 containerd[2003]: time="2026-01-21T23:37:53.366477857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:37:53.367613 containerd[2003]: time="2026-01-21T23:37:53.366558648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:53.367710 kubelet[3515]: E0121 23:37:53.367197 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:53.367710 kubelet[3515]: E0121 23:37:53.367261 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:37:53.367710 kubelet[3515]: E0121 23:37:53.367447 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrr6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84667c98fc-fz867_calico-apiserver(20617c9b-94b4-4cb3-a6b6-dc13407eb549): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:53.368869 kubelet[3515]: E0121 23:37:53.368796 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:37:56.102645 containerd[2003]: time="2026-01-21T23:37:56.102577499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 23:37:56.394111 containerd[2003]: time="2026-01-21T23:37:56.393930974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:56.396543 containerd[2003]: time="2026-01-21T23:37:56.396473738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 23:37:56.396676 containerd[2003]: time="2026-01-21T23:37:56.396602937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:56.396903 kubelet[3515]: E0121 23:37:56.396845 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:37:56.397881 kubelet[3515]: E0121 23:37:56.396917 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:37:56.397881 kubelet[3515]: E0121 23:37:56.397125 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxswv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:56.401678 containerd[2003]: time="2026-01-21T23:37:56.401344213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 23:37:56.667497 containerd[2003]: time="2026-01-21T23:37:56.667334265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:37:56.670222 containerd[2003]: time="2026-01-21T23:37:56.670112603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 23:37:56.670355 containerd[2003]: time="2026-01-21T23:37:56.670180933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 23:37:56.670702 kubelet[3515]: E0121 23:37:56.670653 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:37:56.670901 kubelet[3515]: E0121 23:37:56.670863 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:37:56.671278 kubelet[3515]: E0121 23:37:56.671188 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxswv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 23:37:56.672901 kubelet[3515]: E0121 23:37:56.672823 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:37:57.382804 systemd[1]: Started sshd@9-172.31.29.34:22-68.220.241.50:40630.service - OpenSSH per-connection server daemon (68.220.241.50:40630). 
Jan 21 23:37:57.385012 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:37:57.385065 kernel: audit: type=1130 audit(1769038677.382:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.34:22-68.220.241.50:40630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:57.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.34:22-68.220.241.50:40630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:57.843000 audit[5731]: USER_ACCT pid=5731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:57.844336 sshd[5731]: Accepted publickey for core from 68.220.241.50 port 40630 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:37:57.851006 kernel: audit: type=1101 audit(1769038677.843:793): pid=5731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:57.851000 audit[5731]: CRED_ACQ pid=5731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:57.853956 sshd-session[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:37:57.862674 kernel: audit: type=1103 audit(1769038677.851:794): pid=5731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:57.862767 kernel: audit: type=1006 audit(1769038677.852:795): pid=5731 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 21 23:37:57.852000 audit[5731]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe50704f0 a2=3 a3=0 items=0 ppid=1 pid=5731 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:57.869117 kernel: audit: type=1300 audit(1769038677.852:795): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe50704f0 a2=3 a3=0 items=0 ppid=1 pid=5731 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:57.852000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:37:57.872044 kernel: audit: type=1327 audit(1769038677.852:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:37:57.879821 systemd-logind[1971]: New session 10 of user core. Jan 21 23:37:57.889319 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 21 23:37:57.894000 audit[5731]: USER_START pid=5731 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:57.903077 kernel: audit: type=1105 audit(1769038677.894:796): pid=5731 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:57.903000 audit[5734]: CRED_ACQ pid=5734 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:57.910020 kernel: audit: type=1103 audit(1769038677.903:797): pid=5734 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:58.209138 sshd[5734]: Connection closed by 68.220.241.50 port 40630 Jan 21 23:37:58.210494 sshd-session[5731]: pam_unix(sshd:session): session closed for user core Jan 21 23:37:58.212000 audit[5731]: USER_END pid=5731 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:58.222730 systemd[1]: sshd@9-172.31.29.34:22-68.220.241.50:40630.service: Deactivated successfully. Jan 21 23:37:58.229883 kernel: audit: type=1106 audit(1769038678.212:798): pid=5731 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:58.229967 kernel: audit: type=1104 audit(1769038678.213:799): pid=5731 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:58.213000 audit[5731]: CRED_DISP pid=5731 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:58.227652 systemd[1]: session-10.scope: Deactivated successfully. Jan 21 23:37:58.230099 systemd-logind[1971]: Session 10 logged out. Waiting for processes to exit. Jan 21 23:37:58.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.34:22-68.220.241.50:40630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:58.235720 systemd-logind[1971]: Removed session 10. 
Jan 21 23:37:58.302097 systemd[1]: Started sshd@10-172.31.29.34:22-68.220.241.50:40632.service - OpenSSH per-connection server daemon (68.220.241.50:40632). Jan 21 23:37:58.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.34:22-68.220.241.50:40632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:58.762000 audit[5747]: USER_ACCT pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:58.763900 sshd[5747]: Accepted publickey for core from 68.220.241.50 port 40632 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:37:58.764000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:58.764000 audit[5747]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe04b7af0 a2=3 a3=0 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:58.764000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:37:58.767091 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:37:58.778404 systemd-logind[1971]: New session 11 of user core. Jan 21 23:37:58.783349 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 21 23:37:58.788000 audit[5747]: USER_START pid=5747 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:58.793000 audit[5750]: CRED_ACQ pid=5750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:59.202652 sshd[5750]: Connection closed by 68.220.241.50 port 40632 Jan 21 23:37:59.203322 sshd-session[5747]: pam_unix(sshd:session): session closed for user core Jan 21 23:37:59.205000 audit[5747]: USER_END pid=5747 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:59.205000 audit[5747]: CRED_DISP pid=5747 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:59.210791 systemd-logind[1971]: Session 11 logged out. Waiting for processes to exit. 
Jan 21 23:37:59.211874 systemd[1]: sshd@10-172.31.29.34:22-68.220.241.50:40632.service: Deactivated successfully. Jan 21 23:37:59.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.34:22-68.220.241.50:40632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:59.216580 systemd[1]: session-11.scope: Deactivated successfully. Jan 21 23:37:59.224422 systemd-logind[1971]: Removed session 11. Jan 21 23:37:59.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.34:22-68.220.241.50:40646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:37:59.308881 systemd[1]: Started sshd@11-172.31.29.34:22-68.220.241.50:40646.service - OpenSSH per-connection server daemon (68.220.241.50:40646). Jan 21 23:37:59.801000 audit[5763]: USER_ACCT pid=5763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:59.802773 sshd[5763]: Accepted publickey for core from 68.220.241.50 port 40646 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:37:59.804000 audit[5763]: CRED_ACQ pid=5763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:59.805000 audit[5763]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecd47050 a2=3 a3=0 items=0 ppid=1 pid=5763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:37:59.805000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:37:59.806864 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:37:59.822836 systemd-logind[1971]: New session 12 of user core. Jan 21 23:37:59.832333 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 21 23:37:59.840000 audit[5763]: USER_START pid=5763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:37:59.845000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:00.192326 sshd[5766]: Connection closed by 68.220.241.50 port 40646 Jan 21 23:38:00.192107 sshd-session[5763]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:00.195000 audit[5763]: USER_END pid=5763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:00.196000 audit[5763]: CRED_DISP pid=5763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:00.202376 systemd[1]: sshd@11-172.31.29.34:22-68.220.241.50:40646.service: Deactivated successfully. Jan 21 23:38:00.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.34:22-68.220.241.50:40646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:00.208274 systemd[1]: session-12.scope: Deactivated successfully. Jan 21 23:38:00.210060 systemd-logind[1971]: Session 12 logged out. Waiting for processes to exit. Jan 21 23:38:00.214448 systemd-logind[1971]: Removed session 12. 
Jan 21 23:38:01.102281 kubelet[3515]: E0121 23:38:01.102206 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:38:02.120356 kubelet[3515]: E0121 23:38:02.119868 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64b9b9d79c-64kd2" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" Jan 21 23:38:04.102090 kubelet[3515]: E0121 23:38:04.102034 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:38:05.101075 kubelet[3515]: E0121 23:38:05.100655 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:38:05.101075 kubelet[3515]: E0121 23:38:05.100919 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" podUID="de79900c-2c4b-48e5-9995-14ed014509c5" Jan 21 23:38:05.286812 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 21 23:38:05.287165 kernel: audit: type=1130 audit(1769038685.282:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@12-172.31.29.34:22-68.220.241.50:49166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:05.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.34:22-68.220.241.50:49166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:05.283250 systemd[1]: Started sshd@12-172.31.29.34:22-68.220.241.50:49166.service - OpenSSH per-connection server daemon (68.220.241.50:49166). Jan 21 23:38:05.780000 audit[5806]: USER_ACCT pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:05.783258 sshd[5806]: Accepted publickey for core from 68.220.241.50 port 49166 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:05.788173 kernel: audit: type=1101 audit(1769038685.780:820): pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:05.788000 audit[5806]: CRED_ACQ pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:05.798533 kernel: audit: type=1103 audit(1769038685.788:821): pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:05.798674 kernel: audit: type=1006 audit(1769038685.788:822): pid=5806 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 21 23:38:05.798719 kernel: audit: type=1300 audit(1769038685.788:822): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea777080 a2=3 a3=0 items=0 ppid=1 pid=5806 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:05.788000 audit[5806]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea777080 a2=3 a3=0 items=0 ppid=1 pid=5806 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:05.788000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:05.807204 kernel: audit: type=1327 audit(1769038685.788:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:05.821795 sshd-session[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:05.831152 systemd-logind[1971]: New session 13 of user core. Jan 21 23:38:05.842341 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 21 23:38:05.847000 audit[5806]: USER_START pid=5806 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:05.856143 kernel: audit: type=1105 audit(1769038685.847:823): pid=5806 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:05.857195 kernel: audit: type=1103 audit(1769038685.855:824): pid=5809 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:05.855000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:06.168070 sshd[5809]: Connection closed by 68.220.241.50 port 49166 Jan 21 23:38:06.169029 sshd-session[5806]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:06.171000 audit[5806]: USER_END pid=5806 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:06.171000 audit[5806]: CRED_DISP pid=5806 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:06.178306 systemd[1]: sshd@12-172.31.29.34:22-68.220.241.50:49166.service: Deactivated successfully. Jan 21 23:38:06.178380 systemd-logind[1971]: Session 13 logged out. Waiting for processes to exit. Jan 21 23:38:06.185279 kernel: audit: type=1106 audit(1769038686.171:825): pid=5806 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:06.185444 kernel: audit: type=1104 audit(1769038686.171:826): pid=5806 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:06.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.34:22-68.220.241.50:49166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:06.186636 systemd[1]: session-13.scope: Deactivated successfully. Jan 21 23:38:06.194553 systemd-logind[1971]: Removed session 13. 
Jan 21 23:38:08.101862 kubelet[3515]: E0121 23:38:08.101790 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:38:11.256610 systemd[1]: Started sshd@13-172.31.29.34:22-68.220.241.50:49178.service - OpenSSH per-connection server daemon (68.220.241.50:49178). Jan 21 23:38:11.263596 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:38:11.263693 kernel: audit: type=1130 audit(1769038691.255:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.34:22-68.220.241.50:49178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:11.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.34:22-68.220.241.50:49178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:11.725000 audit[5827]: USER_ACCT pid=5827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:11.733297 sshd[5827]: Accepted publickey for core from 68.220.241.50 port 49178 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:11.733000 audit[5827]: CRED_ACQ pid=5827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:11.735308 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:11.739742 kernel: audit: type=1101 audit(1769038691.725:829): pid=5827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:11.739840 kernel: audit: type=1103 audit(1769038691.733:830): pid=5827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:11.744378 kernel: audit: type=1006 audit(1769038691.733:831): pid=5827 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 21 23:38:11.733000 audit[5827]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd666cff0 a2=3 a3=0 items=0 ppid=1 pid=5827 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:11.751170 kernel: audit: type=1300 audit(1769038691.733:831): 
arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd666cff0 a2=3 a3=0 items=0 ppid=1 pid=5827 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:11.751305 kernel: audit: type=1327 audit(1769038691.733:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:11.733000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:11.762185 systemd-logind[1971]: New session 14 of user core. Jan 21 23:38:11.770346 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 21 23:38:11.776000 audit[5827]: USER_START pid=5827 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:11.780000 audit[5830]: CRED_ACQ pid=5830 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:11.790418 kernel: audit: type=1105 audit(1769038691.776:832): pid=5827 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:11.790790 kernel: audit: type=1103 audit(1769038691.780:833): pid=5830 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:12.100735 sshd[5830]: Connection closed by 68.220.241.50 port 49178 Jan 21 23:38:12.103549 kubelet[3515]: E0121 23:38:12.103479 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:38:12.105582 sshd-session[5827]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:12.110000 audit[5827]: USER_END pid=5827 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:12.119135 systemd[1]: 
sshd@13-172.31.29.34:22-68.220.241.50:49178.service: Deactivated successfully. Jan 21 23:38:12.126232 systemd[1]: session-14.scope: Deactivated successfully. Jan 21 23:38:12.111000 audit[5827]: CRED_DISP pid=5827 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:12.135228 kernel: audit: type=1106 audit(1769038692.110:834): pid=5827 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:12.135353 kernel: audit: type=1104 audit(1769038692.111:835): pid=5827 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:12.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.34:22-68.220.241.50:49178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:12.135896 systemd-logind[1971]: Session 14 logged out. Waiting for processes to exit. Jan 21 23:38:12.140713 systemd-logind[1971]: Removed session 14. Jan 21 23:38:13.103038 containerd[2003]: time="2026-01-21T23:38:13.102680633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 23:38:13.396492 containerd[2003]: time="2026-01-21T23:38:13.396189131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:38:13.398578 containerd[2003]: time="2026-01-21T23:38:13.398469130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 23:38:13.398708 containerd[2003]: time="2026-01-21T23:38:13.398595583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 23:38:13.399208 kubelet[3515]: E0121 23:38:13.399061 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:38:13.399208 kubelet[3515]: E0121 23:38:13.399164 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:38:13.401102 kubelet[3515]: E0121 23:38:13.399969 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:24f3b654eaca42c1be474f4c2fb54f82,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bbxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b9b9d79c-64kd2_calico-system(b82636a0-c786-4a73-90fe-e05d2e69a656): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 23:38:13.403358 containerd[2003]: time="2026-01-21T23:38:13.403303420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 23:38:13.698058 containerd[2003]: time="2026-01-21T23:38:13.697725224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:38:13.700901 containerd[2003]: time="2026-01-21T23:38:13.699964144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 23:38:13.701107 containerd[2003]: time="2026-01-21T23:38:13.700125847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 23:38:13.701398 kubelet[3515]: E0121 23:38:13.701337 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:38:13.701509 kubelet[3515]: E0121 23:38:13.701407 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:38:13.701649 kubelet[3515]: E0121 23:38:13.701565 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b9b9d79c-64kd2_calico-system(b82636a0-c786-4a73-90fe-e05d2e69a656): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 23:38:13.703325 kubelet[3515]: E0121 23:38:13.703241 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64b9b9d79c-64kd2" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" Jan 21 23:38:16.110007 containerd[2003]: time="2026-01-21T23:38:16.109280107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 23:38:16.423740 containerd[2003]: time="2026-01-21T23:38:16.423521696Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:38:16.426022 containerd[2003]: time="2026-01-21T23:38:16.425892166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 
23:38:16.426209 containerd[2003]: time="2026-01-21T23:38:16.426079920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 23:38:16.427251 kubelet[3515]: E0121 23:38:16.427188 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:38:16.428203 kubelet[3515]: E0121 23:38:16.427911 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:38:16.431175 kubelet[3515]: E0121 23:38:16.428274 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdxpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jq2tn_calico-system(c9be5f6a-b238-4709-b078-d405c449b532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 23:38:16.431175 kubelet[3515]: E0121 23:38:16.430037 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:38:16.431438 containerd[2003]: time="2026-01-21T23:38:16.428955938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 23:38:16.682463 containerd[2003]: time="2026-01-21T23:38:16.682297416Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:38:16.685350 containerd[2003]: time="2026-01-21T23:38:16.685228330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 23:38:16.685527 containerd[2003]: time="2026-01-21T23:38:16.685295509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 23:38:16.686523 kubelet[3515]: E0121 23:38:16.686266 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:38:16.686523 kubelet[3515]: E0121 23:38:16.686392 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:38:16.687310 kubelet[3515]: E0121 23:38:16.687188 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bkmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8c7587b9f-qxd5v_calico-system(ca6e1be7-3778-4e58-b701-59e16c774819): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 23:38:16.689261 kubelet[3515]: E0121 23:38:16.688637 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:38:16.689380 containerd[2003]: time="2026-01-21T23:38:16.688186986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:38:16.984670 containerd[2003]: 
time="2026-01-21T23:38:16.984495878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:38:16.987329 containerd[2003]: time="2026-01-21T23:38:16.987237779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:38:16.987329 containerd[2003]: time="2026-01-21T23:38:16.987282432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:38:16.987917 kubelet[3515]: E0121 23:38:16.987871 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:38:16.988151 kubelet[3515]: E0121 23:38:16.988028 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:38:16.988585 kubelet[3515]: E0121 23:38:16.988453 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d8fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-84667c98fc-g2rqz_calico-apiserver(de79900c-2c4b-48e5-9995-14ed014509c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:38:16.990038 kubelet[3515]: E0121 23:38:16.989949 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" podUID="de79900c-2c4b-48e5-9995-14ed014509c5" Jan 21 23:38:17.101374 containerd[2003]: time="2026-01-21T23:38:17.101126266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:38:17.205273 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:38:17.206241 kernel: audit: type=1130 audit(1769038697.199:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.34:22-68.220.241.50:53294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.34:22-68.220.241.50:53294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:17.200537 systemd[1]: Started sshd@14-172.31.29.34:22-68.220.241.50:53294.service - OpenSSH per-connection server daemon (68.220.241.50:53294). Jan 21 23:38:17.367255 containerd[2003]: time="2026-01-21T23:38:17.366411742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:38:17.370368 containerd[2003]: time="2026-01-21T23:38:17.370092461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:38:17.371229 containerd[2003]: time="2026-01-21T23:38:17.370112035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:38:17.371359 kubelet[3515]: E0121 23:38:17.371126 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:38:17.371359 kubelet[3515]: E0121 23:38:17.371192 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:38:17.373279 kubelet[3515]: E0121 23:38:17.373181 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8bfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5468c6d76d-ffj6z_calico-apiserver(920c851c-0448-41e7-8aac-ea1379198aa5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:38:17.375383 kubelet[3515]: E0121 23:38:17.375311 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:38:17.707000 audit[5848]: USER_ACCT pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:17.716344 sshd[5848]: Accepted publickey for core from 68.220.241.50 port 53294 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:17.720177 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:17.717000 audit[5848]: CRED_ACQ pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh 
res=success' Jan 21 23:38:17.721017 kernel: audit: type=1101 audit(1769038697.707:838): pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:17.733967 kernel: audit: type=1103 audit(1769038697.717:839): pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:17.734140 kernel: audit: type=1006 audit(1769038697.717:840): pid=5848 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 21 23:38:17.745113 kernel: audit: type=1300 audit(1769038697.717:840): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe71fd9d0 a2=3 a3=0 items=0 ppid=1 pid=5848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:17.717000 audit[5848]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe71fd9d0 a2=3 a3=0 items=0 ppid=1 pid=5848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:17.748606 systemd-logind[1971]: New session 15 of user core. Jan 21 23:38:17.717000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:17.753465 kernel: audit: type=1327 audit(1769038697.717:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:17.755366 systemd[1]: Started session-15.scope - Session 15 of User core. 
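The type=1327 PROCTITLE audit records above carry the process title as a hex string; 737368642D73657373696F6E3A20636F7265205B707269765D is "sshd-session: core [priv]". A minimal Python sketch for decoding such fields when reading this journal (the helper name is ours, not part of auditd):

```python
# Decode the hex-encoded proctitle field of an audit PROCTITLE record.
# NUL bytes separate argv entries when a full command line was captured.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("ascii", errors="replace")

print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]

# The iptables-restore PROCTITLE records further down decode the same way:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```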
Jan 21 23:38:17.766000 audit[5848]: USER_START pid=5848 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:17.775000 audit[5851]: CRED_ACQ pid=5851 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:17.784027 kernel: audit: type=1105 audit(1769038697.766:841): pid=5848 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:17.784163 kernel: audit: type=1103 audit(1769038697.775:842): pid=5851 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:18.133681 sshd[5851]: Connection closed by 68.220.241.50 port 53294 Jan 21 23:38:18.135360 sshd-session[5848]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:18.138000 audit[5848]: USER_END pid=5848 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:18.147271 systemd[1]: sshd@14-172.31.29.34:22-68.220.241.50:53294.service: Deactivated successfully. Jan 21 23:38:18.138000 audit[5848]: CRED_DISP pid=5848 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:18.156892 kernel: audit: type=1106 audit(1769038698.138:843): pid=5848 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:18.157057 kernel: audit: type=1104 audit(1769038698.138:844): pid=5848 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:18.157221 systemd[1]: session-15.scope: Deactivated successfully. Jan 21 23:38:18.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.34:22-68.220.241.50:53294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:18.160657 systemd-logind[1971]: Session 15 logged out. Waiting for processes to exit. Jan 21 23:38:18.165721 systemd-logind[1971]: Removed session 15. 
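Sessions 14 and 15 above show the full PAM/audit lifecycle of one SSH connection: USER_ACCT and the first CRED_ACQ are logged before a session number is assigned (ses=4294967295), then USER_START, USER_END and CRED_DISP carry the numbered session, bracketed by SERVICE_START/SERVICE_STOP of the per-connection systemd unit. A rough Python sketch (a hypothetical helper, not an existing tool) for grouping the numbered records by their ses= field when reading a dump like this:

```python
import re
from collections import defaultdict

# Group PAM lifecycle audit records by their ses= field. Record names and the
# field layout follow the journal lines above; everything else is assumption.
LIFECYCLE = ("USER_ACCT", "CRED_ACQ", "USER_START", "USER_END", "CRED_DISP")
SES_RE = re.compile(r"\bses=(\d+)\b")

def sessions(lines):
    by_ses = defaultdict(list)
    for line in lines:
        ses = SES_RE.search(line)
        kind = next((k for k in LIFECYCLE if f": {k} " in line), None)
        if ses and kind and ses.group(1) != "4294967295":
            by_ses[int(ses.group(1))].append(kind)
    return dict(by_ses)

# sessions(open("journal.txt")) would map e.g. 15 -> ["USER_START", "CRED_ACQ",
# "USER_END", "CRED_DISP"], mirroring session 15 above.
```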
Jan 21 23:38:21.102776 containerd[2003]: time="2026-01-21T23:38:21.102707707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:38:21.373253 containerd[2003]: time="2026-01-21T23:38:21.372443220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:38:21.374918 containerd[2003]: time="2026-01-21T23:38:21.374670662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:38:21.374918 containerd[2003]: time="2026-01-21T23:38:21.374846398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:38:21.376079 kubelet[3515]: E0121 23:38:21.375313 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:38:21.376079 kubelet[3515]: E0121 23:38:21.375388 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:38:21.376079 kubelet[3515]: E0121 23:38:21.375572 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrr6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84667c98fc-fz867_calico-apiserver(20617c9b-94b4-4cb3-a6b6-dc13407eb549): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:38:21.378059 kubelet[3515]: E0121 23:38:21.377067 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:38:23.225710 systemd[1]: Started sshd@15-172.31.29.34:22-68.220.241.50:53400.service - OpenSSH per-connection server daemon (68.220.241.50:53400). Jan 21 23:38:23.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.34:22-68.220.241.50:53400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:23.228247 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:38:23.228349 kernel: audit: type=1130 audit(1769038703.225:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.34:22-68.220.241.50:53400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:23.707000 audit[5864]: USER_ACCT pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:23.709678 sshd[5864]: Accepted publickey for core from 68.220.241.50 port 53400 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:23.715034 kernel: audit: type=1101 audit(1769038703.707:847): pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:23.715000 audit[5864]: CRED_ACQ pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:23.719093 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:23.725463 kernel: audit: type=1103 audit(1769038703.715:848): pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:23.725607 kernel: audit: type=1006 audit(1769038703.717:849): pid=5864 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 21 23:38:23.717000 audit[5864]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe02443d0 a2=3 a3=0 items=0 ppid=1 pid=5864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:23.732217 kernel: audit: type=1300 audit(1769038703.717:849): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe02443d0 a2=3 a3=0 items=0 ppid=1 pid=5864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:23.717000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:23.738046 kernel: audit: type=1327 audit(1769038703.717:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:23.739685 systemd-logind[1971]: New session 16 of user core. Jan 21 23:38:23.748362 systemd[1]: Started session-16.scope - Session 16 of User core. 
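Every pull above fails the same way: containerd reports "fetch failed after status: 404 Not Found" from ghcr.io, so the ghcr.io/flatcar/calico/*:v3.30.4 tags simply do not resolve in the registry. A rough way to confirm that from outside the node is to ask the registry for the manifest directly; the sketch below assumes ghcr.io follows the standard OCI distribution token flow (the token endpoint and Accept headers are assumptions, not taken from this log):

```python
import json
import urllib.error
import urllib.request

def tag_exists(repo: str, tag: str) -> bool:
    """Return True if ghcr.io can resolve repo:tag, False on the 404 seen above."""
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    token = json.load(urllib.request.urlopen(token_url))["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# tag_exists("flatcar/calico/apiserver", "v3.30.4") should come back False
# as long as the registry keeps answering 404 as logged above.
```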
Jan 21 23:38:23.760000 audit[5864]: USER_START pid=5864 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:23.764000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:23.773837 kernel: audit: type=1105 audit(1769038703.760:850): pid=5864 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:23.774154 kernel: audit: type=1103 audit(1769038703.764:851): pid=5867 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:24.102690 containerd[2003]: time="2026-01-21T23:38:24.102622936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 23:38:24.105055 kubelet[3515]: E0121 23:38:24.104664 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64b9b9d79c-64kd2" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" Jan 21 23:38:24.195042 sshd[5867]: Connection closed by 68.220.241.50 port 53400 Jan 21 23:38:24.196314 sshd-session[5864]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:24.202000 audit[5864]: USER_END pid=5864 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:24.209396 systemd[1]: sshd@15-172.31.29.34:22-68.220.241.50:53400.service: Deactivated successfully. 
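The "Error syncing pod, skipping ... ImagePullBackOff" entries are the back-off phase of the same failure: the kubelet alternates between ErrImagePull (an attempt just failed) and ImagePullBackOff (waiting before the next attempt), doubling the delay between attempts up to a cap. The 10-second initial delay and 300-second cap in the sketch below are the commonly documented kubelet defaults, not values read from this log:

```python
# Sketch of the image-pull back-off schedule the kubelet applies between the
# retries visible above; initial delay and cap are assumed upstream defaults.
def backoff_schedule(attempts: int, initial: float = 10.0, cap: float = 300.0):
    delay = initial
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= 2

print(list(backoff_schedule(7)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```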
Jan 21 23:38:24.202000 audit[5864]: CRED_DISP pid=5864 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:24.216244 kernel: audit: type=1106 audit(1769038704.202:852): pid=5864 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:24.216389 kernel: audit: type=1104 audit(1769038704.202:853): pid=5864 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:24.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.34:22-68.220.241.50:53400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:24.219681 systemd[1]: session-16.scope: Deactivated successfully. Jan 21 23:38:24.229378 systemd-logind[1971]: Session 16 logged out. Waiting for processes to exit. Jan 21 23:38:24.233357 systemd-logind[1971]: Removed session 16. Jan 21 23:38:24.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.34:22-68.220.241.50:53410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:24.292271 systemd[1]: Started sshd@16-172.31.29.34:22-68.220.241.50:53410.service - OpenSSH per-connection server daemon (68.220.241.50:53410). 
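The per-connection units that systemd starts and stops around each session (sshd@14-..., sshd@15-..., and now sshd@16-172.31.29.34:22-68.220.241.50:53410.service) appear to embed a connection counter plus the local and remote endpoints. The parse below is inferred from the unit names in this log rather than from systemd documentation, so treat the field meanings as an assumption:

```python
import re

# Split a per-connection sshd unit name into its apparent components.
UNIT_RE = re.compile(
    r"sshd@(?P<seq>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)"
    r"-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
)

def parse_sshd_unit(name: str) -> dict:
    m = UNIT_RE.search(name)
    return m.groupdict() if m else {}

print(parse_sshd_unit("sshd@16-172.31.29.34:22-68.220.241.50:53410.service"))
# {'seq': '16', 'laddr': '172.31.29.34', 'lport': '22',
#  'raddr': '68.220.241.50', 'rport': '53410'}
```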
Jan 21 23:38:24.433272 containerd[2003]: time="2026-01-21T23:38:24.432656605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:38:24.435039 containerd[2003]: time="2026-01-21T23:38:24.434856089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 23:38:24.435039 containerd[2003]: time="2026-01-21T23:38:24.434927873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 23:38:24.435474 kubelet[3515]: E0121 23:38:24.435394 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:38:24.435474 kubelet[3515]: E0121 23:38:24.435464 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:38:24.436790 kubelet[3515]: E0121 23:38:24.435679 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxswv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 23:38:24.442755 containerd[2003]: time="2026-01-21T23:38:24.442563062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 23:38:24.692900 containerd[2003]: time="2026-01-21T23:38:24.692706832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:38:24.696002 containerd[2003]: time="2026-01-21T23:38:24.695650975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 23:38:24.696002 containerd[2003]: time="2026-01-21T23:38:24.695716702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 23:38:24.697003 kubelet[3515]: E0121 23:38:24.696692 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:38:24.697538 kubelet[3515]: E0121 23:38:24.697295 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:38:24.699267 kubelet[3515]: E0121 23:38:24.699048 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxswv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 23:38:24.700477 kubelet[3515]: E0121 23:38:24.700340 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:38:24.796000 audit[5878]: USER_ACCT pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:24.800057 sshd[5878]: Accepted publickey for core from 68.220.241.50 port 53410 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:24.803230 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:24.801000 audit[5878]: CRED_ACQ pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:24.801000 audit[5878]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe47be000 a2=3 a3=0 items=0 ppid=1 pid=5878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:24.801000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:24.820066 systemd-logind[1971]: New session 17 of user core. Jan 21 23:38:24.827351 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 21 23:38:24.835000 audit[5878]: USER_START pid=5878 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:24.839000 audit[5881]: CRED_ACQ pid=5881 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:25.492007 sshd[5881]: Connection closed by 68.220.241.50 port 53410 Jan 21 23:38:25.491867 sshd-session[5878]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:25.497000 audit[5878]: USER_END pid=5878 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:25.498000 audit[5878]: CRED_DISP pid=5878 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:25.504307 systemd[1]: sshd@16-172.31.29.34:22-68.220.241.50:53410.service: Deactivated successfully. Jan 21 23:38:25.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.34:22-68.220.241.50:53410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:25.513187 systemd[1]: session-17.scope: Deactivated successfully. Jan 21 23:38:25.520029 systemd-logind[1971]: Session 17 logged out. Waiting for processes to exit. Jan 21 23:38:25.525404 systemd-logind[1971]: Removed session 17. Jan 21 23:38:25.593523 systemd[1]: Started sshd@17-172.31.29.34:22-68.220.241.50:53420.service - OpenSSH per-connection server daemon (68.220.241.50:53420). Jan 21 23:38:25.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.34:22-68.220.241.50:53420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:26.090000 audit[5891]: USER_ACCT pid=5891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:26.091395 sshd[5891]: Accepted publickey for core from 68.220.241.50 port 53420 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:26.092000 audit[5891]: CRED_ACQ pid=5891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:26.092000 audit[5891]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4551640 a2=3 a3=0 items=0 ppid=1 pid=5891 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:26.092000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:26.094221 sshd-session[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:26.110084 systemd-logind[1971]: New session 18 of user core. Jan 21 23:38:26.114820 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 21 23:38:26.122000 audit[5891]: USER_START pid=5891 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:26.126000 audit[5894]: CRED_ACQ pid=5894 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:27.607000 audit[5918]: NETFILTER_CFG table=filter:149 family=2 entries=14 op=nft_register_rule pid=5918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:38:27.607000 audit[5918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffc05ddf0 a2=0 a3=1 items=0 ppid=3660 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:27.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:38:27.615661 sshd[5894]: Connection closed by 68.220.241.50 port 53420 Jan 21 23:38:27.614000 audit[5918]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=5918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:38:27.614000 audit[5918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffc05ddf0 a2=0 a3=1 items=0 ppid=3660 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:27.614000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:38:27.618710 sshd-session[5891]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:27.623000 audit[5891]: USER_END pid=5891 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:27.625000 audit[5891]: CRED_DISP pid=5891 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:27.631653 systemd[1]: sshd@17-172.31.29.34:22-68.220.241.50:53420.service: Deactivated successfully. Jan 21 23:38:27.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.34:22-68.220.241.50:53420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:27.638397 systemd[1]: session-18.scope: Deactivated successfully. Jan 21 23:38:27.646295 systemd-logind[1971]: Session 18 logged out. Waiting for processes to exit. Jan 21 23:38:27.652103 systemd-logind[1971]: Removed session 18. Jan 21 23:38:27.684000 audit[5923]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=5923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:38:27.684000 audit[5923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcf606720 a2=0 a3=1 items=0 ppid=3660 pid=5923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:27.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:38:27.697000 audit[5923]: NETFILTER_CFG table=nat:152 family=2 entries=20 op=nft_register_rule pid=5923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:38:27.697000 audit[5923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcf606720 a2=0 a3=1 items=0 ppid=3660 pid=5923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:27.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:38:27.712462 systemd[1]: Started sshd@18-172.31.29.34:22-68.220.241.50:53422.service - OpenSSH per-connection server daemon (68.220.241.50:53422). Jan 21 23:38:27.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.34:22-68.220.241.50:53422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:28.103134 kubelet[3515]: E0121 23:38:28.103053 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:38:28.198000 audit[5925]: USER_ACCT pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.200287 sshd[5925]: Accepted publickey for core from 68.220.241.50 port 53422 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:28.200000 audit[5925]: CRED_ACQ pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.201000 audit[5925]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd943d310 a2=3 a3=0 items=0 ppid=1 pid=5925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:28.201000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:28.203820 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:28.219467 systemd-logind[1971]: New session 19 of user core. Jan 21 23:38:28.229332 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 21 23:38:28.247593 kernel: kauditd_printk_skb: 41 callbacks suppressed Jan 21 23:38:28.247751 kernel: audit: type=1105 audit(1769038708.238:881): pid=5925 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.238000 audit[5925]: USER_START pid=5925 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.250000 audit[5928]: CRED_ACQ pid=5928 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.260036 kernel: audit: type=1103 audit(1769038708.250:882): pid=5928 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.938856 sshd[5928]: Connection closed by 68.220.241.50 port 53422 Jan 21 23:38:28.939779 sshd-session[5925]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:28.941000 audit[5925]: USER_END pid=5925 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.948000 audit[5925]: CRED_DISP pid=5925 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.957420 systemd[1]: sshd@18-172.31.29.34:22-68.220.241.50:53422.service: Deactivated successfully. Jan 21 23:38:28.958453 kernel: audit: type=1106 audit(1769038708.941:883): pid=5925 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.958553 kernel: audit: type=1104 audit(1769038708.948:884): pid=5925 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:28.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.34:22-68.220.241.50:53422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:28.964577 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 21 23:38:28.965914 kernel: audit: type=1131 audit(1769038708.957:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.34:22-68.220.241.50:53422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:28.971237 systemd-logind[1971]: Session 19 logged out. Waiting for processes to exit. Jan 21 23:38:28.974482 systemd-logind[1971]: Removed session 19. Jan 21 23:38:29.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.34:22-68.220.241.50:53438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:29.033536 systemd[1]: Started sshd@19-172.31.29.34:22-68.220.241.50:53438.service - OpenSSH per-connection server daemon (68.220.241.50:53438). Jan 21 23:38:29.045031 kernel: audit: type=1130 audit(1769038709.032:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.34:22-68.220.241.50:53438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:29.514000 audit[5937]: USER_ACCT pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:29.516202 sshd[5937]: Accepted publickey for core from 68.220.241.50 port 53438 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:29.523040 kernel: audit: type=1101 audit(1769038709.514:887): pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:29.523000 audit[5937]: CRED_ACQ pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:29.527549 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:29.533858 kernel: audit: type=1103 audit(1769038709.523:888): pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:29.534051 kernel: audit: type=1006 audit(1769038709.523:889): pid=5937 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 21 23:38:29.534606 kernel: audit: type=1300 audit(1769038709.523:889): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe323990 a2=3 a3=0 items=0 ppid=1 pid=5937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:29.523000 audit[5937]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe323990 a2=3 a3=0 items=0 ppid=1 pid=5937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:29.523000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:29.550061 systemd-logind[1971]: New session 20 of user core. Jan 21 23:38:29.556753 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 21 23:38:29.564000 audit[5937]: USER_START pid=5937 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:29.567000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:29.920522 sshd[5940]: Connection closed by 68.220.241.50 port 53438 Jan 21 23:38:29.920909 sshd-session[5937]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:29.925000 audit[5937]: USER_END pid=5937 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:29.926000 audit[5937]: CRED_DISP pid=5937 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:29.935171 systemd[1]: sshd@19-172.31.29.34:22-68.220.241.50:53438.service: Deactivated successfully. Jan 21 23:38:29.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.34:22-68.220.241.50:53438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:29.941886 systemd[1]: session-20.scope: Deactivated successfully. Jan 21 23:38:29.945097 systemd-logind[1971]: Session 20 logged out. Waiting for processes to exit. Jan 21 23:38:29.952247 systemd-logind[1971]: Removed session 20. 
Jan 21 23:38:30.105465 kubelet[3515]: E0121 23:38:30.105241 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:38:31.099952 kubelet[3515]: E0121 23:38:31.099885 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" podUID="de79900c-2c4b-48e5-9995-14ed014509c5" Jan 21 23:38:31.100896 kubelet[3515]: E0121 23:38:31.100121 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:38:34.347000 audit[5977]: NETFILTER_CFG table=filter:153 family=2 entries=26 op=nft_register_rule pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:38:34.354769 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 21 23:38:34.354894 kernel: audit: type=1325 audit(1769038714.347:895): table=filter:153 family=2 entries=26 op=nft_register_rule pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:38:34.347000 audit[5977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffefaca730 a2=0 a3=1 items=0 ppid=3660 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:34.361850 kernel: audit: type=1300 audit(1769038714.347:895): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffefaca730 a2=0 a3=1 items=0 ppid=3660 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:34.347000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:38:34.365493 kernel: audit: type=1327 audit(1769038714.347:895): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:38:34.374000 audit[5977]: NETFILTER_CFG table=nat:154 family=2 entries=104 op=nft_register_chain pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:38:34.383051 kernel: audit: type=1325 audit(1769038714.374:896): table=nat:154 family=2 entries=104 
op=nft_register_chain pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 23:38:34.374000 audit[5977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffefaca730 a2=0 a3=1 items=0 ppid=3660 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:34.374000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:38:34.405368 kernel: audit: type=1300 audit(1769038714.374:896): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffefaca730 a2=0 a3=1 items=0 ppid=3660 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:34.405469 kernel: audit: type=1327 audit(1769038714.374:896): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 23:38:35.012885 systemd[1]: Started sshd@20-172.31.29.34:22-68.220.241.50:58144.service - OpenSSH per-connection server daemon (68.220.241.50:58144). Jan 21 23:38:35.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.34:22-68.220.241.50:58144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:35.024047 kernel: audit: type=1130 audit(1769038715.012:897): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.34:22-68.220.241.50:58144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:35.100616 kubelet[3515]: E0121 23:38:35.100453 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:38:35.485000 audit[5979]: USER_ACCT pid=5979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:35.493268 sshd[5979]: Accepted publickey for core from 68.220.241.50 port 58144 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:35.497049 kernel: audit: type=1101 audit(1769038715.485:898): pid=5979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:35.497653 sshd-session[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:35.495000 audit[5979]: CRED_ACQ pid=5979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:35.508910 kernel: audit: type=1103 audit(1769038715.495:899): pid=5979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:35.509051 kernel: audit: type=1006 audit(1769038715.495:900): pid=5979 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 21 23:38:35.495000 audit[5979]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc10a5a40 a2=3 a3=0 items=0 ppid=1 pid=5979 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:35.495000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:35.522092 systemd-logind[1971]: New session 21 of user core. Jan 21 23:38:35.528067 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 21 23:38:35.535000 audit[5979]: USER_START pid=5979 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:35.539000 audit[5982]: CRED_ACQ pid=5982 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:35.934909 sshd[5982]: Connection closed by 68.220.241.50 port 58144 Jan 21 23:38:35.934785 sshd-session[5979]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:35.939000 audit[5979]: USER_END pid=5979 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:35.940000 audit[5979]: CRED_DISP pid=5979 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:35.947634 systemd[1]: sshd@20-172.31.29.34:22-68.220.241.50:58144.service: Deactivated successfully. Jan 21 23:38:35.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.34:22-68.220.241.50:58144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:35.955599 systemd[1]: session-21.scope: Deactivated successfully. Jan 21 23:38:35.960687 systemd-logind[1971]: Session 21 logged out. Waiting for processes to exit. Jan 21 23:38:35.964504 systemd-logind[1971]: Removed session 21. 
Jan 21 23:38:37.100506 kubelet[3515]: E0121 23:38:37.100390 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:38:38.106304 kubelet[3515]: E0121 23:38:38.105567 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64b9b9d79c-64kd2" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" Jan 21 23:38:40.101642 kubelet[3515]: E0121 23:38:40.101140 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:38:41.033752 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 21 23:38:41.033908 kernel: audit: type=1130 audit(1769038721.026:906): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.34:22-68.220.241.50:58154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:41.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.34:22-68.220.241.50:58154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:41.026956 systemd[1]: Started sshd@21-172.31.29.34:22-68.220.241.50:58154.service - OpenSSH per-connection server daemon (68.220.241.50:58154). 
Jan 21 23:38:41.104029 kubelet[3515]: E0121 23:38:41.103163 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:38:41.509000 audit[5998]: USER_ACCT pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.511118 sshd[5998]: Accepted publickey for core from 68.220.241.50 port 58154 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:41.518638 sshd-session[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:41.526267 kernel: audit: type=1101 audit(1769038721.509:907): pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.526415 kernel: audit: type=1103 audit(1769038721.516:908): pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.516000 audit[5998]: CRED_ACQ pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.533286 kernel: audit: type=1006 audit(1769038721.516:909): pid=5998 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 21 23:38:41.516000 audit[5998]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe8541f0 a2=3 a3=0 items=0 ppid=1 pid=5998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:41.541408 kernel: audit: type=1300 audit(1769038721.516:909): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe8541f0 a2=3 a3=0 items=0 ppid=1 pid=5998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:41.516000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:41.546289 kernel: audit: type=1327 audit(1769038721.516:909): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:41.549074 systemd-logind[1971]: New session 22 of user core. Jan 21 23:38:41.554314 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 21 23:38:41.562000 audit[5998]: USER_START pid=5998 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.570000 audit[6002]: CRED_ACQ pid=6002 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.572111 kernel: audit: type=1105 audit(1769038721.562:910): pid=5998 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.578008 kernel: audit: type=1103 audit(1769038721.570:911): pid=6002 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.915453 sshd[6002]: Connection closed by 68.220.241.50 port 58154 Jan 21 23:38:41.915293 sshd-session[5998]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:41.918000 audit[5998]: USER_END pid=5998 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.918000 audit[5998]: CRED_DISP pid=5998 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.931346 systemd[1]: sshd@21-172.31.29.34:22-68.220.241.50:58154.service: Deactivated successfully. Jan 21 23:38:41.938324 kernel: audit: type=1106 audit(1769038721.918:912): pid=5998 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.938457 kernel: audit: type=1104 audit(1769038721.918:913): pid=5998 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:41.938996 systemd[1]: session-22.scope: Deactivated successfully. Jan 21 23:38:41.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.34:22-68.220.241.50:58154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:41.946485 systemd-logind[1971]: Session 22 logged out. Waiting for processes to exit. Jan 21 23:38:41.950336 systemd-logind[1971]: Removed session 22. 
Jan 21 23:38:43.100412 kubelet[3515]: E0121 23:38:43.100358 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:38:45.101448 kubelet[3515]: E0121 23:38:45.100631 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" podUID="de79900c-2c4b-48e5-9995-14ed014509c5" Jan 21 23:38:47.014761 systemd[1]: Started sshd@22-172.31.29.34:22-68.220.241.50:57268.service - OpenSSH per-connection server daemon (68.220.241.50:57268). Jan 21 23:38:47.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.34:22-68.220.241.50:57268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:47.017252 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:38:47.017396 kernel: audit: type=1130 audit(1769038727.014:915): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.34:22-68.220.241.50:57268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:47.494843 sshd[6014]: Accepted publickey for core from 68.220.241.50 port 57268 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:47.492000 audit[6014]: USER_ACCT pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.503913 sshd-session[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:47.501000 audit[6014]: CRED_ACQ pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.510801 kernel: audit: type=1101 audit(1769038727.492:916): pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.510906 kernel: audit: type=1103 audit(1769038727.501:917): pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.519162 kernel: audit: type=1006 audit(1769038727.501:918): pid=6014 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 21 23:38:47.501000 audit[6014]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc62ee490 a2=3 a3=0 items=0 ppid=1 pid=6014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:47.528872 kernel: audit: type=1300 audit(1769038727.501:918): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc62ee490 a2=3 a3=0 items=0 ppid=1 pid=6014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:47.529869 systemd-logind[1971]: New session 23 of user core. Jan 21 23:38:47.532336 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 21 23:38:47.501000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:47.536056 kernel: audit: type=1327 audit(1769038727.501:918): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:47.540000 audit[6014]: USER_START pid=6014 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.549551 kernel: audit: type=1105 audit(1769038727.540:919): pid=6014 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.547000 audit[6017]: CRED_ACQ pid=6017 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.556143 kernel: audit: type=1103 audit(1769038727.547:920): pid=6017 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.921551 sshd[6017]: Connection closed by 68.220.241.50 port 57268 Jan 21 23:38:47.922331 sshd-session[6014]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:47.926000 audit[6014]: USER_END pid=6014 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.933199 systemd[1]: sshd@22-172.31.29.34:22-68.220.241.50:57268.service: Deactivated successfully. Jan 21 23:38:47.926000 audit[6014]: CRED_DISP pid=6014 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.940061 kernel: audit: type=1106 audit(1769038727.926:921): pid=6014 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.943695 systemd[1]: session-23.scope: Deactivated successfully. Jan 21 23:38:47.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.34:22-68.220.241.50:57268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:47.948030 kernel: audit: type=1104 audit(1769038727.926:922): pid=6014 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:47.949952 systemd-logind[1971]: Session 23 logged out. Waiting for processes to exit. Jan 21 23:38:47.953763 systemd-logind[1971]: Removed session 23. Jan 21 23:38:50.101318 kubelet[3515]: E0121 23:38:50.100212 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:38:51.106822 kubelet[3515]: E0121 23:38:51.106389 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64b9b9d79c-64kd2" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" Jan 21 23:38:51.108750 kubelet[3515]: E0121 23:38:51.108552 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8" Jan 21 23:38:52.102027 kubelet[3515]: E0121 23:38:52.101134 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" 
podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:38:52.102027 kubelet[3515]: E0121 23:38:52.101933 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:38:53.013457 systemd[1]: Started sshd@23-172.31.29.34:22-68.220.241.50:42998.service - OpenSSH per-connection server daemon (68.220.241.50:42998). Jan 21 23:38:53.020257 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:38:53.020362 kernel: audit: type=1130 audit(1769038733.012:924): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.34:22-68.220.241.50:42998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:53.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.34:22-68.220.241.50:42998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:53.478000 audit[6030]: USER_ACCT pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.479426 sshd[6030]: Accepted publickey for core from 68.220.241.50 port 42998 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:53.485000 audit[6030]: CRED_ACQ pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.489111 sshd-session[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:53.492650 kernel: audit: type=1101 audit(1769038733.478:925): pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.492776 kernel: audit: type=1103 audit(1769038733.485:926): pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.492847 kernel: audit: type=1006 audit(1769038733.485:927): pid=6030 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 21 23:38:53.497168 kernel: audit: type=1300 audit(1769038733.485:927): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdaa8e9f0 a2=3 a3=0 items=0 ppid=1 pid=6030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:53.485000 
audit[6030]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdaa8e9f0 a2=3 a3=0 items=0 ppid=1 pid=6030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:53.485000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:53.505037 kernel: audit: type=1327 audit(1769038733.485:927): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:53.514075 systemd-logind[1971]: New session 24 of user core. Jan 21 23:38:53.520357 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 21 23:38:53.529000 audit[6030]: USER_START pid=6030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.540060 kernel: audit: type=1105 audit(1769038733.529:928): pid=6030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.540000 audit[6033]: CRED_ACQ pid=6033 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.548029 kernel: audit: type=1103 audit(1769038733.540:929): pid=6033 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.908785 sshd[6033]: Connection closed by 68.220.241.50 port 42998 Jan 21 23:38:53.909251 sshd-session[6030]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:53.912000 audit[6030]: USER_END pid=6030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.912000 audit[6030]: CRED_DISP pid=6030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.928266 systemd[1]: sshd@23-172.31.29.34:22-68.220.241.50:42998.service: Deactivated successfully. 
Jan 21 23:38:53.932456 kernel: audit: type=1106 audit(1769038733.912:930): pid=6030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.932620 kernel: audit: type=1104 audit(1769038733.912:931): pid=6030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:53.934610 systemd[1]: session-24.scope: Deactivated successfully. Jan 21 23:38:53.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.34:22-68.220.241.50:42998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:53.940512 systemd-logind[1971]: Session 24 logged out. Waiting for processes to exit. Jan 21 23:38:53.945861 systemd-logind[1971]: Removed session 24. Jan 21 23:38:54.101273 kubelet[3515]: E0121 23:38:54.101203 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:38:59.001073 systemd[1]: Started sshd@24-172.31.29.34:22-68.220.241.50:43000.service - OpenSSH per-connection server daemon (68.220.241.50:43000). Jan 21 23:38:59.003015 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:38:59.003160 kernel: audit: type=1130 audit(1769038739.000:933): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.34:22-68.220.241.50:43000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:59.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.34:22-68.220.241.50:43000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 23:38:59.481000 audit[6053]: USER_ACCT pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.488836 sshd[6053]: Accepted publickey for core from 68.220.241.50 port 43000 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:38:59.489447 kernel: audit: type=1101 audit(1769038739.481:934): pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.490000 audit[6053]: CRED_ACQ pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.497832 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:38:59.501605 kernel: audit: type=1103 audit(1769038739.490:935): pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.501719 kernel: audit: type=1006 audit(1769038739.490:936): pid=6053 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 21 23:38:59.490000 audit[6053]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5610e30 a2=3 a3=0 items=0 ppid=1 pid=6053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:59.508231 kernel: audit: type=1300 audit(1769038739.490:936): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5610e30 a2=3 a3=0 items=0 ppid=1 pid=6053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:38:59.490000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:59.511145 kernel: audit: type=1327 audit(1769038739.490:936): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:38:59.521644 systemd-logind[1971]: New session 25 of user core. Jan 21 23:38:59.531402 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 21 23:38:59.538000 audit[6053]: USER_START pid=6053 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.548000 audit[6056]: CRED_ACQ pid=6056 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.550317 kernel: audit: type=1105 audit(1769038739.538:937): pid=6053 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.556158 kernel: audit: type=1103 audit(1769038739.548:938): pid=6056 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.878036 sshd[6056]: Connection closed by 68.220.241.50 port 43000 Jan 21 23:38:59.878953 sshd-session[6053]: pam_unix(sshd:session): session closed for user core Jan 21 23:38:59.883000 audit[6053]: USER_END pid=6053 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.883000 audit[6053]: CRED_DISP pid=6053 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.908051 kernel: audit: type=1106 audit(1769038739.883:939): pid=6053 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.908194 kernel: audit: type=1104 audit(1769038739.883:940): pid=6053 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:38:59.901650 systemd[1]: sshd@24-172.31.29.34:22-68.220.241.50:43000.service: Deactivated successfully. Jan 21 23:38:59.905971 systemd[1]: session-25.scope: Deactivated successfully. Jan 21 23:38:59.910445 systemd-logind[1971]: Session 25 logged out. Waiting for processes to exit. Jan 21 23:38:59.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.34:22-68.220.241.50:43000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:38:59.916317 systemd-logind[1971]: Removed session 25. 
Jan 21 23:39:00.104178 containerd[2003]: time="2026-01-21T23:39:00.103047998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:39:00.383795 containerd[2003]: time="2026-01-21T23:39:00.383717481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:00.386182 containerd[2003]: time="2026-01-21T23:39:00.386037613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:39:00.386182 containerd[2003]: time="2026-01-21T23:39:00.386113055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:00.386454 kubelet[3515]: E0121 23:39:00.386371 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:00.388126 kubelet[3515]: E0121 23:39:00.386465 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:00.388126 kubelet[3515]: E0121 23:39:00.386814 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d8fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84667c98fc-g2rqz_calico-apiserver(de79900c-2c4b-48e5-9995-14ed014509c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:00.388482 kubelet[3515]: E0121 23:39:00.388240 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-g2rqz" podUID="de79900c-2c4b-48e5-9995-14ed014509c5" Jan 21 23:39:04.103507 containerd[2003]: time="2026-01-21T23:39:04.103375357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 23:39:04.396837 containerd[2003]: time="2026-01-21T23:39:04.396361564Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:04.398656 containerd[2003]: time="2026-01-21T23:39:04.398576879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 23:39:04.398809 containerd[2003]: time="2026-01-21T23:39:04.398713383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:04.401157 kubelet[3515]: E0121 23:39:04.399033 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:39:04.401157 kubelet[3515]: E0121 23:39:04.399109 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 23:39:04.401157 kubelet[3515]: E0121 23:39:04.399432 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdxpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jq2tn_calico-system(c9be5f6a-b238-4709-b078-d405c449b532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:04.402708 kubelet[3515]: E0121 23:39:04.400695 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jq2tn" podUID="c9be5f6a-b238-4709-b078-d405c449b532" Jan 21 23:39:04.402918 containerd[2003]: time="2026-01-21T23:39:04.402264051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 
21 23:39:04.670569 containerd[2003]: time="2026-01-21T23:39:04.669705052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:04.672026 containerd[2003]: time="2026-01-21T23:39:04.671924170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 23:39:04.672183 containerd[2003]: time="2026-01-21T23:39:04.672100998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:04.672695 kubelet[3515]: E0121 23:39:04.672379 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:39:04.672695 kubelet[3515]: E0121 23:39:04.672451 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 23:39:04.672695 kubelet[3515]: E0121 23:39:04.672606 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:24f3b654eaca42c1be474f4c2fb54f82,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bbxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b9b9d79c-64kd2_calico-system(b82636a0-c786-4a73-90fe-e05d2e69a656): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:04.676764 containerd[2003]: time="2026-01-21T23:39:04.676622436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 23:39:04.972666 systemd[1]: Started sshd@25-172.31.29.34:22-68.220.241.50:51800.service - OpenSSH 
per-connection server daemon (68.220.241.50:51800). Jan 21 23:39:04.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.34:22-68.220.241.50:51800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:39:04.976197 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 23:39:04.976663 kernel: audit: type=1130 audit(1769038744.971:942): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.34:22-68.220.241.50:51800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:39:04.977165 containerd[2003]: time="2026-01-21T23:39:04.976381498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:04.979874 containerd[2003]: time="2026-01-21T23:39:04.979763674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 23:39:04.982442 containerd[2003]: time="2026-01-21T23:39:04.982362306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:04.983249 kubelet[3515]: E0121 23:39:04.983191 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:39:04.984988 kubelet[3515]: E0121 23:39:04.983456 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 23:39:04.984988 kubelet[3515]: E0121 23:39:04.983628 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b9b9d79c-64kd2_calico-system(b82636a0-c786-4a73-90fe-e05d2e69a656): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:04.985460 kubelet[3515]: E0121 23:39:04.985410 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64b9b9d79c-64kd2" podUID="b82636a0-c786-4a73-90fe-e05d2e69a656" Jan 21 23:39:05.103206 containerd[2003]: time="2026-01-21T23:39:05.103121257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:39:05.392957 containerd[2003]: time="2026-01-21T23:39:05.392883596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:05.395254 containerd[2003]: time="2026-01-21T23:39:05.395163344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
21 23:39:05.395445 containerd[2003]: time="2026-01-21T23:39:05.395290816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:05.395799 kubelet[3515]: E0121 23:39:05.395733 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:05.396009 kubelet[3515]: E0121 23:39:05.395944 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:05.396539 kubelet[3515]: E0121 23:39:05.396427 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrr6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84667c98fc-fz867_calico-apiserver(20617c9b-94b4-4cb3-a6b6-dc13407eb549): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:05.397455 containerd[2003]: time="2026-01-21T23:39:05.396781203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 23:39:05.398034 
kubelet[3515]: E0121 23:39:05.397673 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84667c98fc-fz867" podUID="20617c9b-94b4-4cb3-a6b6-dc13407eb549" Jan 21 23:39:05.470000 audit[6095]: USER_ACCT pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.479230 sshd[6095]: Accepted publickey for core from 68.220.241.50 port 51800 ssh2: RSA SHA256:0Jyj3GUdfX5R/KvQW0U5h/DtK9hgsaurmbNcjb2MW2o Jan 21 23:39:05.479000 audit[6095]: CRED_ACQ pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.488585 kernel: audit: type=1101 audit(1769038745.470:943): pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.488710 kernel: audit: type=1103 audit(1769038745.479:944): pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.494252 kernel: audit: type=1006 audit(1769038745.479:945): pid=6095 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 21 23:39:05.479000 audit[6095]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdfd606e0 a2=3 a3=0 items=0 ppid=1 pid=6095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:05.503438 kernel: audit: type=1300 audit(1769038745.479:945): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdfd606e0 a2=3 a3=0 items=0 ppid=1 pid=6095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 23:39:05.479000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:39:05.508859 kernel: audit: type=1327 audit(1769038745.479:945): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 23:39:05.510619 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 23:39:05.525797 systemd-logind[1971]: New session 26 of user core. Jan 21 23:39:05.533435 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 21 23:39:05.541000 audit[6095]: USER_START pid=6095 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.551000 audit[6098]: CRED_ACQ pid=6098 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.558768 kernel: audit: type=1105 audit(1769038745.541:946): pid=6095 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.558917 kernel: audit: type=1103 audit(1769038745.551:947): pid=6098 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.680159 containerd[2003]: time="2026-01-21T23:39:05.679025415Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:05.684054 containerd[2003]: time="2026-01-21T23:39:05.683964232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 23:39:05.684394 containerd[2003]: time="2026-01-21T23:39:05.684007530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:05.684663 kubelet[3515]: E0121 23:39:05.684617 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:05.687234 kubelet[3515]: E0121 23:39:05.685275 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 23:39:05.687671 kubelet[3515]: E0121 23:39:05.687582 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8bfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5468c6d76d-ffj6z_calico-apiserver(920c851c-0448-41e7-8aac-ea1379198aa5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:05.690375 kubelet[3515]: E0121 23:39:05.690274 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5468c6d76d-ffj6z" podUID="920c851c-0448-41e7-8aac-ea1379198aa5" Jan 21 23:39:05.879999 sshd[6098]: Connection closed by 68.220.241.50 port 51800 Jan 21 23:39:05.882227 sshd-session[6095]: pam_unix(sshd:session): session closed for user core Jan 21 23:39:05.885000 audit[6095]: USER_END pid=6095 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.892000 audit[6095]: CRED_DISP pid=6095 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.898293 systemd[1]: 
sshd@25-172.31.29.34:22-68.220.241.50:51800.service: Deactivated successfully. Jan 21 23:39:05.902617 kernel: audit: type=1106 audit(1769038745.885:948): pid=6095 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.902715 kernel: audit: type=1104 audit(1769038745.892:949): pid=6095 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 23:39:05.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.34:22-68.220.241.50:51800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 23:39:05.912011 systemd[1]: session-26.scope: Deactivated successfully. Jan 21 23:39:05.918638 systemd-logind[1971]: Session 26 logged out. Waiting for processes to exit. Jan 21 23:39:05.922883 systemd-logind[1971]: Removed session 26. Jan 21 23:39:06.103751 containerd[2003]: time="2026-01-21T23:39:06.103656314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 23:39:06.380872 containerd[2003]: time="2026-01-21T23:39:06.380306117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:06.382601 containerd[2003]: time="2026-01-21T23:39:06.382469151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 23:39:06.382601 containerd[2003]: time="2026-01-21T23:39:06.382533199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:06.382868 kubelet[3515]: E0121 23:39:06.382784 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:39:06.382868 kubelet[3515]: E0121 23:39:06.382856 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 23:39:06.383655 containerd[2003]: time="2026-01-21T23:39:06.383284131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 23:39:06.384052 kubelet[3515]: E0121 23:39:06.383884 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bkmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8c7587b9f-qxd5v_calico-system(ca6e1be7-3778-4e58-b701-59e16c774819): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:06.385308 kubelet[3515]: E0121 23:39:06.385218 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c7587b9f-qxd5v" podUID="ca6e1be7-3778-4e58-b701-59e16c774819" Jan 21 23:39:06.663045 containerd[2003]: time="2026-01-21T23:39:06.660523127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:06.663045 containerd[2003]: 
time="2026-01-21T23:39:06.662769460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 23:39:06.663045 containerd[2003]: time="2026-01-21T23:39:06.662900494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:06.664249 kubelet[3515]: E0121 23:39:06.663893 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:39:06.664249 kubelet[3515]: E0121 23:39:06.663960 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 23:39:06.664249 kubelet[3515]: E0121 23:39:06.664155 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxswv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:06.667720 containerd[2003]: time="2026-01-21T23:39:06.667398988Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 23:39:06.933653 containerd[2003]: time="2026-01-21T23:39:06.932756799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 23:39:06.935192 containerd[2003]: time="2026-01-21T23:39:06.935115107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 23:39:06.935423 containerd[2003]: time="2026-01-21T23:39:06.935240661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 23:39:06.937138 kubelet[3515]: E0121 23:39:06.935540 3515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:39:06.937138 kubelet[3515]: E0121 23:39:06.935600 3515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 23:39:06.937138 kubelet[3515]: E0121 23:39:06.935760 3515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxswv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-spv6f_calico-system(9c98ebba-3094-4d44-b58e-8378134e1be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 23:39:06.939075 kubelet[3515]: E0121 23:39:06.937507 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-spv6f" podUID="9c98ebba-3094-4d44-b58e-8378134e1be8"