Dec 12 17:26:08.996108 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 12 17:26:08.996154 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 12 17:26:08.996178 kernel: KASLR disabled due to lack of seed Dec 12 17:26:08.996195 kernel: efi: EFI v2.7 by EDK II Dec 12 17:26:08.996211 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Dec 12 17:26:08.996227 kernel: secureboot: Secure boot disabled Dec 12 17:26:08.996245 kernel: ACPI: Early table checksum verification disabled Dec 12 17:26:08.996261 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 12 17:26:08.996278 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 12 17:26:08.996298 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 12 17:26:08.996315 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 12 17:26:08.996331 kernel: ACPI: FACS 0x0000000078630000 000040 Dec 12 17:26:08.996346 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 12 17:26:08.996363 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 12 17:26:08.996386 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 12 17:26:08.996403 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 12 17:26:08.996420 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 12 17:26:08.996437 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 12 17:26:08.996454 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 12 17:26:08.996471 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 12 17:26:08.996487 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 12 17:26:08.996561 kernel: printk: legacy bootconsole [uart0] enabled Dec 12 17:26:08.996579 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 12 17:26:08.996597 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 12 17:26:08.996620 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Dec 12 17:26:08.996637 kernel: Zone ranges: Dec 12 17:26:08.996653 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 12 17:26:08.996670 kernel: DMA32 empty Dec 12 17:26:08.996687 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 12 17:26:08.996703 kernel: Device empty Dec 12 17:26:08.996719 kernel: Movable zone start for each node Dec 12 17:26:08.996736 kernel: Early memory node ranges Dec 12 17:26:08.996753 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 12 17:26:08.996769 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 12 17:26:08.996786 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 12 17:26:08.996802 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 12 17:26:08.996823 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 12 17:26:08.996839 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 12 17:26:08.996856 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 12 17:26:08.996873 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 12 17:26:08.996897 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 12 17:26:08.996919 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Dec 12 17:26:08.996937 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Dec 12 17:26:08.996955 kernel: psci: probing for conduit method from ACPI. Dec 12 17:26:08.996972 kernel: psci: PSCIv1.0 detected in firmware. Dec 12 17:26:08.996990 kernel: psci: Using standard PSCI v0.2 function IDs Dec 12 17:26:08.997007 kernel: psci: Trusted OS migration not required Dec 12 17:26:08.997025 kernel: psci: SMC Calling Convention v1.1 Dec 12 17:26:08.997043 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Dec 12 17:26:08.997060 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 12 17:26:08.997082 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 12 17:26:08.997100 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 12 17:26:08.997118 kernel: Detected PIPT I-cache on CPU0 Dec 12 17:26:08.997135 kernel: CPU features: detected: GIC system register CPU interface Dec 12 17:26:08.997153 kernel: CPU features: detected: Spectre-v2 Dec 12 17:26:08.997170 kernel: CPU features: detected: Spectre-v3a Dec 12 17:26:08.997188 kernel: CPU features: detected: Spectre-BHB Dec 12 17:26:08.997205 kernel: CPU features: detected: ARM erratum 1742098 Dec 12 17:26:08.997223 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 12 17:26:08.997240 kernel: alternatives: applying boot alternatives Dec 12 17:26:08.997260 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:26:08.997283 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 17:26:08.997301 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:26:08.997318 kernel: Fallback order for Node 0: 0 Dec 12 17:26:08.997336 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Dec 12 17:26:08.997354 kernel: Policy zone: Normal Dec 12 17:26:08.997371 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 17:26:08.997389 kernel: software IO TLB: area num 2. Dec 12 17:26:08.997406 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Dec 12 17:26:08.997424 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 12 17:26:08.997441 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 17:26:08.997464 kernel: rcu: RCU event tracing is enabled. Dec 12 17:26:08.997482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 12 17:26:08.997522 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 17:26:08.997542 kernel: Tracing variant of Tasks RCU enabled. Dec 12 17:26:08.997560 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 17:26:08.997578 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 12 17:26:08.997595 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 12 17:26:08.997613 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 17:26:08.997631 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 12 17:26:08.997648 kernel: GICv3: 96 SPIs implemented Dec 12 17:26:08.997665 kernel: GICv3: 0 Extended SPIs implemented Dec 12 17:26:08.997689 kernel: Root IRQ handler: gic_handle_irq Dec 12 17:26:08.997706 kernel: GICv3: GICv3 features: 16 PPIs Dec 12 17:26:08.997723 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 12 17:26:08.997741 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 12 17:26:08.997758 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 12 17:26:08.997776 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Dec 12 17:26:08.997794 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Dec 12 17:26:08.997811 kernel: GICv3: using LPI property table @0x0000000400110000 Dec 12 17:26:08.997829 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 12 17:26:08.997846 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Dec 12 17:26:08.997864 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 17:26:08.997888 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 12 17:26:08.997905 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 12 17:26:08.997923 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 12 17:26:08.997943 kernel: Console: colour dummy device 80x25 Dec 12 17:26:08.997961 kernel: printk: legacy console [tty1] enabled Dec 12 17:26:08.997979 kernel: ACPI: Core revision 20240827 Dec 12 17:26:08.997998 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 12 17:26:08.998017 kernel: pid_max: default: 32768 minimum: 301 Dec 12 17:26:08.998041 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 17:26:08.998059 kernel: landlock: Up and running. Dec 12 17:26:08.998078 kernel: SELinux: Initializing. Dec 12 17:26:08.998096 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:26:08.998115 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:26:08.998133 kernel: rcu: Hierarchical SRCU implementation. Dec 12 17:26:08.998152 kernel: rcu: Max phase no-delay instances is 400. Dec 12 17:26:08.998178 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 17:26:08.998197 kernel: Remapping and enabling EFI services. Dec 12 17:26:08.998215 kernel: smp: Bringing up secondary CPUs ... Dec 12 17:26:08.998234 kernel: Detected PIPT I-cache on CPU1 Dec 12 17:26:08.998252 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 12 17:26:08.998271 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Dec 12 17:26:08.998289 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 12 17:26:08.998311 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 17:26:08.998330 kernel: SMP: Total of 2 processors activated. 
Dec 12 17:26:08.998348 kernel: CPU: All CPU(s) started at EL1 Dec 12 17:26:08.998377 kernel: CPU features: detected: 32-bit EL0 Support Dec 12 17:26:08.998401 kernel: CPU features: detected: 32-bit EL1 Support Dec 12 17:26:08.998420 kernel: CPU features: detected: CRC32 instructions Dec 12 17:26:08.998439 kernel: alternatives: applying system-wide alternatives Dec 12 17:26:08.998459 kernel: Memory: 3823468K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 185652K reserved, 16384K cma-reserved) Dec 12 17:26:08.998479 kernel: devtmpfs: initialized Dec 12 17:26:09.000588 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 17:26:09.000636 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 12 17:26:09.000658 kernel: 23664 pages in range for non-PLT usage Dec 12 17:26:09.000677 kernel: 515184 pages in range for PLT usage Dec 12 17:26:09.000709 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 17:26:09.000729 kernel: SMBIOS 3.0.0 present. Dec 12 17:26:09.000749 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 12 17:26:09.000769 kernel: DMI: Memory slots populated: 0/0 Dec 12 17:26:09.000788 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 17:26:09.000807 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 12 17:26:09.000827 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 12 17:26:09.000847 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 12 17:26:09.000871 kernel: audit: initializing netlink subsys (disabled) Dec 12 17:26:09.000891 kernel: audit: type=2000 audit(0.228:1): state=initialized audit_enabled=0 res=1 Dec 12 17:26:09.000910 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 17:26:09.000929 kernel: cpuidle: using governor menu Dec 12 17:26:09.000948 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 12 17:26:09.000967 kernel: ASID allocator initialised with 65536 entries Dec 12 17:26:09.000986 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 17:26:09.001011 kernel: Serial: AMBA PL011 UART driver Dec 12 17:26:09.001030 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 17:26:09.001049 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 17:26:09.001069 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 12 17:26:09.001089 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 12 17:26:09.001108 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 17:26:09.001128 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 17:26:09.001152 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 12 17:26:09.001172 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 12 17:26:09.001191 kernel: ACPI: Added _OSI(Module Device) Dec 12 17:26:09.001210 kernel: ACPI: Added _OSI(Processor Device) Dec 12 17:26:09.001230 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 17:26:09.001251 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 17:26:09.001271 kernel: ACPI: Interpreter enabled Dec 12 17:26:09.001296 kernel: ACPI: Using GIC for interrupt routing Dec 12 17:26:09.001315 kernel: ACPI: MCFG table detected, 1 entries Dec 12 17:26:09.001334 kernel: ACPI: CPU0 has been hot-added Dec 12 17:26:09.001354 kernel: ACPI: CPU1 has been hot-added Dec 12 17:26:09.001373 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Dec 12 17:26:09.006920 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 17:26:09.007227 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 17:26:09.007558 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 17:26:09.007828 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Dec 12 17:26:09.008084 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Dec 12 17:26:09.008110 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 12 17:26:09.008131 kernel: acpiphp: Slot [1] registered Dec 12 17:26:09.008151 kernel: acpiphp: Slot [2] registered Dec 12 17:26:09.008180 kernel: acpiphp: Slot [3] registered Dec 12 17:26:09.008199 kernel: acpiphp: Slot [4] registered Dec 12 17:26:09.008218 kernel: acpiphp: Slot [5] registered Dec 12 17:26:09.008237 kernel: acpiphp: Slot [6] registered Dec 12 17:26:09.008256 kernel: acpiphp: Slot [7] registered Dec 12 17:26:09.008275 kernel: acpiphp: Slot [8] registered Dec 12 17:26:09.008294 kernel: acpiphp: Slot [9] registered Dec 12 17:26:09.008317 kernel: acpiphp: Slot [10] registered Dec 12 17:26:09.008337 kernel: acpiphp: Slot [11] registered Dec 12 17:26:09.008356 kernel: acpiphp: Slot [12] registered Dec 12 17:26:09.008374 kernel: acpiphp: Slot [13] registered Dec 12 17:26:09.008393 kernel: acpiphp: Slot [14] registered Dec 12 17:26:09.008413 kernel: acpiphp: Slot [15] registered Dec 12 17:26:09.008431 kernel: acpiphp: Slot [16] registered Dec 12 17:26:09.008451 kernel: acpiphp: Slot [17] registered Dec 12 17:26:09.008474 kernel: acpiphp: Slot [18] registered Dec 12 17:26:09.012466 kernel: acpiphp: Slot [19] registered Dec 12 17:26:09.012542 kernel: acpiphp: Slot [20] registered Dec 12 17:26:09.012565 kernel: acpiphp: Slot [21] registered Dec 12 17:26:09.012585 
kernel: acpiphp: Slot [22] registered Dec 12 17:26:09.012606 kernel: acpiphp: Slot [23] registered Dec 12 17:26:09.012625 kernel: acpiphp: Slot [24] registered Dec 12 17:26:09.012656 kernel: acpiphp: Slot [25] registered Dec 12 17:26:09.012675 kernel: acpiphp: Slot [26] registered Dec 12 17:26:09.012694 kernel: acpiphp: Slot [27] registered Dec 12 17:26:09.012713 kernel: acpiphp: Slot [28] registered Dec 12 17:26:09.012733 kernel: acpiphp: Slot [29] registered Dec 12 17:26:09.012752 kernel: acpiphp: Slot [30] registered Dec 12 17:26:09.012771 kernel: acpiphp: Slot [31] registered Dec 12 17:26:09.012794 kernel: PCI host bridge to bus 0000:00 Dec 12 17:26:09.013101 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 12 17:26:09.013353 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 12 17:26:09.013630 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 12 17:26:09.013873 kernel: pci_bus 0000:00: root bus resource [bus 00] Dec 12 17:26:09.014181 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Dec 12 17:26:09.014474 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Dec 12 17:26:09.025666 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Dec 12 17:26:09.025995 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Dec 12 17:26:09.026267 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Dec 12 17:26:09.026600 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 12 17:26:09.026918 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Dec 12 17:26:09.027206 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Dec 12 17:26:09.027600 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Dec 12 17:26:09.027934 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Dec 12 17:26:09.028230 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 12 17:26:09.028549 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 12 17:26:09.028847 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:26:09.029118 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 12 17:26:09.029151 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:26:09.029173 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:26:09.029194 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:26:09.029214 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:26:09.029246 kernel: iommu: Default domain type: Translated Dec 12 17:26:09.029268 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:26:09.029289 kernel: efivars: Registered efivars operations Dec 12 17:26:09.029309 kernel: vgaarb: loaded Dec 12 17:26:09.029329 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:26:09.029349 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:26:09.029372 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:26:09.029393 kernel: pnp: PnP ACPI init Dec 12 17:26:09.029810 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 12 17:26:09.029850 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:26:09.029870 kernel: NET: Registered PF_INET protocol family Dec 12 17:26:09.029891 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 17:26:09.029911 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 17:26:09.029931 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:26:09.029964 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:26:09.029986 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 17:26:09.030005 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 17:26:09.030025 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:26:09.030045 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:26:09.030065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:26:09.030085 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:26:09.030111 kernel: kvm [1]: HYP mode not available Dec 12 17:26:09.030132 kernel: Initialise system trusted keyrings Dec 12 17:26:09.030152 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 17:26:09.030171 kernel: Key type asymmetric registered Dec 12 17:26:09.030191 kernel: Asymmetric key parser 'x509' registered Dec 12 17:26:09.030211 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:26:09.030231 kernel: io scheduler mq-deadline registered Dec 12 17:26:09.030255 kernel: io scheduler kyber registered Dec 12 17:26:09.030275 kernel: io scheduler bfq registered Dec 12 17:26:09.030682 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 12 17:26:09.030725 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:26:09.030746 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:26:09.030767 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 12 17:26:09.030787 kernel: ACPI: button: Sleep Button [SLPB] Dec 12 17:26:09.030819 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:26:09.030840 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 12 17:26:09.031171 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 12 17:26:09.031208 kernel: printk: legacy console [ttyS0] disabled Dec 12 17:26:09.031228 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 12 17:26:09.031249 kernel: printk: legacy console [ttyS0] enabled Dec 12 17:26:09.031268 kernel: printk: legacy bootconsole [uart0] disabled Dec 12 17:26:09.031300 kernel: thunder_xcv, ver 1.0 Dec 12 17:26:09.031322 kernel: thunder_bgx, ver 1.0 Dec 12 17:26:09.031342 kernel: nicpf, ver 1.0 Dec 12 17:26:09.031363 kernel: nicvf, ver 1.0 Dec 12 17:26:09.032076 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:26:09.032405 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:26:05 UTC (1765560365) Dec 12 17:26:09.032442 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:26:09.032478 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Dec 12 17:26:09.032577 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:26:09.032633 kernel: watchdog: NMI not fully supported Dec 12 17:26:09.034606 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:26:09.034644 kernel: Segment Routing with IPv6 Dec 12 17:26:09.034666 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 17:26:09.034687 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:26:09.034722 kernel: Key type 
dns_resolver registered Dec 12 17:26:09.034742 kernel: registered taskstats version 1 Dec 12 17:26:09.034762 kernel: Loading compiled-in X.509 certificates Dec 12 17:26:09.034783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 12 17:26:09.034803 kernel: Demotion targets for Node 0: null Dec 12 17:26:09.034822 kernel: Key type .fscrypt registered Dec 12 17:26:09.034842 kernel: Key type fscrypt-provisioning registered Dec 12 17:26:09.034868 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 17:26:09.034888 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:26:09.034907 kernel: ima: No architecture policies found Dec 12 17:26:09.034927 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:26:09.034947 kernel: clk: Disabling unused clocks Dec 12 17:26:09.034967 kernel: PM: genpd: Disabling unused power domains Dec 12 17:26:09.034988 kernel: Freeing unused kernel memory: 12416K Dec 12 17:26:09.035018 kernel: Run /init as init process Dec 12 17:26:09.035039 kernel: with arguments: Dec 12 17:26:09.035060 kernel: /init Dec 12 17:26:09.035078 kernel: with environment: Dec 12 17:26:09.035098 kernel: HOME=/ Dec 12 17:26:09.035119 kernel: TERM=linux Dec 12 17:26:09.035139 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 12 17:26:09.035468 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 12 17:26:09.036873 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 12 17:26:09.036916 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:26:09.036938 kernel: GPT:25804799 != 33554431 Dec 12 17:26:09.036959 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:26:09.036981 kernel: GPT:25804799 != 33554431 Dec 12 17:26:09.037014 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:26:09.037036 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 12 17:26:09.037056 kernel: SCSI subsystem initialized Dec 12 17:26:09.038551 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:26:09.038617 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:26:09.038640 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:26:09.038660 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:26:09.038691 kernel: raid6: neonx8 gen() 6496 MB/s Dec 12 17:26:09.038712 kernel: raid6: neonx4 gen() 6483 MB/s Dec 12 17:26:09.038732 kernel: raid6: neonx2 gen() 5446 MB/s Dec 12 17:26:09.038752 kernel: raid6: neonx1 gen() 3936 MB/s Dec 12 17:26:09.038772 kernel: raid6: int64x8 gen() 3616 MB/s Dec 12 17:26:09.038793 kernel: raid6: int64x4 gen() 3679 MB/s Dec 12 17:26:09.038813 kernel: raid6: int64x2 gen() 3566 MB/s Dec 12 17:26:09.038833 kernel: raid6: int64x1 gen() 2708 MB/s Dec 12 17:26:09.038858 kernel: raid6: using algorithm neonx8 gen() 6496 MB/s Dec 12 17:26:09.038879 kernel: raid6: .... 
xor() 4729 MB/s, rmw enabled Dec 12 17:26:09.038900 kernel: raid6: using neon recovery algorithm Dec 12 17:26:09.038920 kernel: xor: measuring software checksum speed Dec 12 17:26:09.038940 kernel: 8regs : 12954 MB/sec Dec 12 17:26:09.038960 kernel: 32regs : 12448 MB/sec Dec 12 17:26:09.038980 kernel: arm64_neon : 8841 MB/sec Dec 12 17:26:09.039004 kernel: xor: using function: 8regs (12954 MB/sec) Dec 12 17:26:09.039024 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:26:09.039044 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (222) Dec 12 17:26:09.039064 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 12 17:26:09.039083 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:09.039104 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 12 17:26:09.039125 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:26:09.039151 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:26:09.039173 kernel: loop: module loaded Dec 12 17:26:09.039192 kernel: loop0: detected capacity change from 0 to 91480 Dec 12 17:26:09.039212 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:26:09.039234 systemd[1]: Successfully made /usr/ read-only. Dec 12 17:26:09.039262 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:26:09.039290 systemd[1]: Detected virtualization amazon. Dec 12 17:26:09.039311 systemd[1]: Detected architecture arm64. Dec 12 17:26:09.039332 systemd[1]: Running in initrd. Dec 12 17:26:09.039353 systemd[1]: No hostname configured, using default hostname. Dec 12 17:26:09.039375 systemd[1]: Hostname set to . Dec 12 17:26:09.039422 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:26:09.039454 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:26:09.039476 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:26:09.040659 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:26:09.040703 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:26:09.040727 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:26:09.040752 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:26:09.040803 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:26:09.040825 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:26:09.040847 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:26:09.040869 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:26:09.040890 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:26:09.040917 systemd[1]: Reached target paths.target - Path Units. 
Dec 12 17:26:09.040938 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:26:09.040959 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:26:09.040981 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:26:09.041002 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:26:09.041024 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:26:09.041045 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:26:09.041071 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:26:09.041093 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:26:09.041115 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:26:09.041139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:26:09.041162 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:26:09.041184 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:26:09.041208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:26:09.041237 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:26:09.041259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:26:09.041281 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:26:09.041306 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:26:09.041328 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:26:09.041351 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:26:09.041379 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:26:09.041403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:26:09.041426 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:26:09.041455 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:26:09.041478 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:26:09.041538 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:26:09.041646 systemd-journald[360]: Collecting audit messages is enabled. Dec 12 17:26:09.041706 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:26:09.041731 kernel: audit: type=1130 audit(1765560369.009:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.041756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:26:09.041784 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 12 17:26:09.041806 kernel: Bridge firewalling registered Dec 12 17:26:09.041828 systemd-journald[360]: Journal started Dec 12 17:26:09.041869 systemd-journald[360]: Runtime Journal (/run/log/journal/ec2d841223dca7af489b42fd872c7288) is 8M, max 75.3M, 67.3M free. Dec 12 17:26:09.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.042157 systemd-modules-load[361]: Inserted module 'br_netfilter' Dec 12 17:26:09.056270 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:26:09.056344 kernel: audit: type=1130 audit(1765560369.050:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.059698 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:26:09.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.075895 kernel: audit: type=1130 audit(1765560369.062:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.077190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:26:09.095768 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:26:09.106647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:09.117907 kernel: audit: type=1130 audit(1765560369.105:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.118604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:26:09.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.131542 kernel: audit: type=1130 audit(1765560369.123:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.135915 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:26:09.147886 systemd-tmpfiles[381]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:26:09.164369 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 12 17:26:09.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.170810 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:09.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.180583 kernel: audit: type=1130 audit(1765560369.164:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.182115 kernel: audit: type=1130 audit(1765560369.174:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.186433 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:26:09.183000 audit: BPF prog-id=6 op=LOAD Dec 12 17:26:09.196124 kernel: audit: type=1334 audit(1765560369.183:9): prog-id=6 op=LOAD Dec 12 17:26:09.218303 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:26:09.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.230948 kernel: audit: type=1130 audit(1765560369.220:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.230793 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:26:09.282780 dracut-cmdline[404]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:26:09.366620 systemd-resolved[391]: Positive Trust Anchors: Dec 12 17:26:09.367939 systemd-resolved[391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:26:09.367958 systemd-resolved[391]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:26:09.369103 systemd-resolved[391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:26:09.560553 kernel: Loading iSCSI transport class v2.0-870. 
Dec 12 17:26:09.610562 kernel: iscsi: registered transport (tcp) Dec 12 17:26:09.663532 kernel: random: crng init done Dec 12 17:26:09.664274 systemd-resolved[391]: Defaulting to hostname 'linux'. Dec 12 17:26:09.668611 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:26:09.681062 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:26:09.681115 kernel: audit: type=1130 audit(1765560369.669:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.681145 kernel: QLogic iSCSI HBA Driver Dec 12 17:26:09.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.671615 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:26:09.730826 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:26:09.778541 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:26:09.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.787724 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:26:09.887720 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:26:09.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.893865 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 17:26:09.902638 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:26:09.972447 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:26:09.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.978000 audit: BPF prog-id=7 op=LOAD Dec 12 17:26:09.978000 audit: BPF prog-id=8 op=LOAD Dec 12 17:26:09.981431 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:26:10.052041 systemd-udevd[643]: Using default interface naming scheme 'v257'. Dec 12 17:26:10.075124 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:26:10.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.087257 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:26:10.145065 dracut-pre-trigger[707]: rd.md=0: removing MD RAID activation Dec 12 17:26:10.153181 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:26:10.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:10.159000 audit: BPF prog-id=9 op=LOAD Dec 12 17:26:10.162853 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:26:10.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.219629 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:26:10.225602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:26:10.275887 systemd-networkd[752]: lo: Link UP Dec 12 17:26:10.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.275908 systemd-networkd[752]: lo: Gained carrier Dec 12 17:26:10.277239 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:26:10.281317 systemd[1]: Reached target network.target - Network. Dec 12 17:26:10.393865 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:26:10.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.406848 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:26:10.645942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:26:10.651603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:10.659647 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:26:10.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.669094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:26:10.689559 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:26:10.689642 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 12 17:26:10.690068 kernel: nvme nvme0: using unchecked data buffer Dec 12 17:26:10.705291 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 12 17:26:10.705773 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 12 17:26:10.725578 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:3a:ca:5b:6c:11 Dec 12 17:26:10.729905 (udev-worker)[807]: Network interface NamePolicy= disabled on kernel command line. Dec 12 17:26:10.754657 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:10.754964 systemd-networkd[752]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:10.755059 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:26:10.771652 systemd-networkd[752]: eth0: Link UP Dec 12 17:26:10.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:10.772097 systemd-networkd[752]: eth0: Gained carrier Dec 12 17:26:10.772123 systemd-networkd[752]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:10.800640 systemd-networkd[752]: eth0: DHCPv4 address 172.31.17.228/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 12 17:26:10.883274 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 12 17:26:10.891002 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:26:10.947657 disk-uuid[880]: Primary Header is updated. Dec 12 17:26:10.947657 disk-uuid[880]: Secondary Entries is updated. Dec 12 17:26:10.947657 disk-uuid[880]: Secondary Header is updated. Dec 12 17:26:11.022022 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 12 17:26:11.058208 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 12 17:26:11.132412 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 12 17:26:11.283649 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:26:11.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.288267 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:26:11.292048 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:26:11.297601 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:26:11.305245 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:26:11.352206 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:26:11.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.058373 disk-uuid[885]: Warning: The kernel is still using the old partition table. Dec 12 17:26:12.058373 disk-uuid[885]: The new table will be used at the next reboot or after you Dec 12 17:26:12.058373 disk-uuid[885]: run partprobe(8) or kpartx(8) Dec 12 17:26:12.058373 disk-uuid[885]: The operation has completed successfully. Dec 12 17:26:12.082431 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:26:12.084827 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:26:12.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.092842 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Dec 12 17:26:12.156037 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1011) Dec 12 17:26:12.156111 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:12.160584 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:12.192947 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 17:26:12.193049 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 17:26:12.204602 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:12.206613 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:26:12.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.210947 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:26:12.232744 systemd-networkd[752]: eth0: Gained IPv6LL Dec 12 17:26:13.558088 ignition[1030]: Ignition 2.22.0 Dec 12 17:26:13.558120 ignition[1030]: Stage: fetch-offline Dec 12 17:26:13.560221 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:13.560670 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:13.561684 ignition[1030]: Ignition finished successfully Dec 12 17:26:13.570181 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:26:13.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.577616 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 12 17:26:13.628100 ignition[1038]: Ignition 2.22.0 Dec 12 17:26:13.628137 ignition[1038]: Stage: fetch Dec 12 17:26:13.629578 ignition[1038]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:13.629963 ignition[1038]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:13.630140 ignition[1038]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:13.646089 ignition[1038]: PUT result: OK Dec 12 17:26:13.649440 ignition[1038]: parsed url from cmdline: "" Dec 12 17:26:13.649464 ignition[1038]: no config URL provided Dec 12 17:26:13.649480 ignition[1038]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:26:13.649548 ignition[1038]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:26:13.649581 ignition[1038]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:13.652196 ignition[1038]: PUT result: OK Dec 12 17:26:13.652276 ignition[1038]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 12 17:26:13.658665 ignition[1038]: GET result: OK Dec 12 17:26:13.658836 ignition[1038]: parsing config with SHA512: fb6a35fd3dba7e8131225d5e8ae574b6f0890d2e1f7770f5eda46ff1f89d08bcaf15511d4606859cc9179844a5a07bed2e366151489d445825dd9200c8b9a2d7 Dec 12 17:26:13.671439 unknown[1038]: fetched base config from "system" Dec 12 17:26:13.671463 unknown[1038]: fetched base config from "system" Dec 12 17:26:13.672178 ignition[1038]: fetch: fetch complete Dec 12 17:26:13.671478 unknown[1038]: fetched user config from "aws" Dec 12 17:26:13.672191 ignition[1038]: fetch: fetch passed Dec 12 17:26:13.672294 ignition[1038]: Ignition finished successfully Dec 12 17:26:13.686703 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 17:26:13.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.693768 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:26:13.757949 ignition[1045]: Ignition 2.22.0 Dec 12 17:26:13.757982 ignition[1045]: Stage: kargs Dec 12 17:26:13.758646 ignition[1045]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:13.758674 ignition[1045]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:13.758832 ignition[1045]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:13.761659 ignition[1045]: PUT result: OK Dec 12 17:26:13.776074 ignition[1045]: kargs: kargs passed Dec 12 17:26:13.776424 ignition[1045]: Ignition finished successfully Dec 12 17:26:13.782950 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:26:13.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.788021 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 12 17:26:13.855423 ignition[1052]: Ignition 2.22.0 Dec 12 17:26:13.855454 ignition[1052]: Stage: disks Dec 12 17:26:13.856332 ignition[1052]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:13.856356 ignition[1052]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:13.856537 ignition[1052]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:13.858983 ignition[1052]: PUT result: OK Dec 12 17:26:13.870150 ignition[1052]: disks: disks passed Dec 12 17:26:13.870307 ignition[1052]: Ignition finished successfully Dec 12 17:26:13.873862 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:26:13.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.878759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:26:13.885219 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:26:13.890556 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:26:13.894915 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:26:13.899520 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:26:13.904934 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:26:14.023879 systemd-fsck[1061]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 12 17:26:14.030638 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:26:14.043386 kernel: kauditd_printk_skb: 22 callbacks suppressed Dec 12 17:26:14.043445 kernel: audit: type=1130 audit(1765560374.034:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.045400 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:26:14.298815 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 12 17:26:14.300024 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:26:14.304269 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:26:14.362895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:26:14.367741 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:26:14.374257 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:26:14.375474 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:26:14.375566 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:26:14.404403 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:26:14.411712 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 12 17:26:14.431556 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1080) Dec 12 17:26:14.437356 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:14.437471 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:14.445285 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 17:26:14.445380 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 17:26:14.448201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:26:15.615365 initrd-setup-root[1104]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:26:15.627105 initrd-setup-root[1111]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:26:15.636964 initrd-setup-root[1118]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:26:15.644430 initrd-setup-root[1125]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:26:16.546163 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:26:16.557863 kernel: audit: type=1130 audit(1765560376.548:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:16.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:16.551565 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:26:16.564391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:26:16.596098 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:26:16.603595 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:16.635855 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:26:16.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:16.645524 kernel: audit: type=1130 audit(1765560376.639:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:16.663251 ignition[1193]: INFO : Ignition 2.22.0 Dec 12 17:26:16.663251 ignition[1193]: INFO : Stage: mount Dec 12 17:26:16.667378 ignition[1193]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:16.667378 ignition[1193]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:16.667378 ignition[1193]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:16.675865 ignition[1193]: INFO : PUT result: OK Dec 12 17:26:16.680071 ignition[1193]: INFO : mount: mount passed Dec 12 17:26:16.680071 ignition[1193]: INFO : Ignition finished successfully Dec 12 17:26:16.684402 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:26:16.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:16.690939 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Dec 12 17:26:16.699681 kernel: audit: type=1130 audit(1765560376.687:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:16.719540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:26:16.753537 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1204) Dec 12 17:26:16.758410 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:16.758463 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:16.766167 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 17:26:16.766238 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 17:26:16.769454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:26:16.824290 ignition[1221]: INFO : Ignition 2.22.0 Dec 12 17:26:16.826390 ignition[1221]: INFO : Stage: files Dec 12 17:26:16.826390 ignition[1221]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:16.826390 ignition[1221]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:16.826390 ignition[1221]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:16.835686 ignition[1221]: INFO : PUT result: OK Dec 12 17:26:16.840599 ignition[1221]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:26:16.844053 ignition[1221]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:26:16.844053 ignition[1221]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:26:16.889331 ignition[1221]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:26:16.894110 ignition[1221]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:26:16.897793 unknown[1221]: wrote ssh authorized keys file for user: core Dec 12 17:26:16.900261 ignition[1221]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:26:16.903232 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:26:16.903232 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:26:16.984254 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:26:17.111942 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:26:17.111942 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:26:17.120484 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:26:17.120484 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:26:17.120484 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:26:17.120484 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" 
Dec 12 17:26:17.120484 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:26:17.120484 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:26:17.120484 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:26:17.151746 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:26:17.155740 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:26:17.155740 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:26:17.165735 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:26:17.171222 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:26:17.171222 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 12 17:26:17.638007 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:26:18.052124 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:26:18.052124 ignition[1221]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:26:18.085071 ignition[1221]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:26:18.092538 ignition[1221]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:26:18.097019 ignition[1221]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:26:18.097019 ignition[1221]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:26:18.097019 ignition[1221]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:26:18.097019 ignition[1221]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:26:18.097019 ignition[1221]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:26:18.097019 ignition[1221]: INFO : files: files passed Dec 12 17:26:18.097019 ignition[1221]: INFO : Ignition finished successfully Dec 12 17:26:18.119632 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:26:18.128762 kernel: audit: type=1130 audit(1765560378.120:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:18.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.124753 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:26:18.135904 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:26:18.152679 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:26:18.155016 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:26:18.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.168826 kernel: audit: type=1130 audit(1765560378.158:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.168919 kernel: audit: type=1131 audit(1765560378.158:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.205162 initrd-setup-root-after-ignition[1252]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:26:18.205162 initrd-setup-root-after-ignition[1252]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:26:18.213541 initrd-setup-root-after-ignition[1256]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:26:18.217917 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:26:18.230887 kernel: audit: type=1130 audit(1765560378.220:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.222432 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:26:18.238320 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:26:18.333283 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:26:18.334572 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:26:18.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.339632 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:26:18.354728 kernel: audit: type=1130 audit(1765560378.337:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:18.354773 kernel: audit: type=1131 audit(1765560378.337:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.354803 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:26:18.357706 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:26:18.363247 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:26:18.405599 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:26:18.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.412376 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:26:18.451187 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:26:18.452422 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:26:18.457560 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:26:18.462344 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:26:18.469854 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:26:18.470641 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:26:18.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.477940 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:26:18.482795 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:26:18.487110 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:26:18.492033 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:26:18.496091 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:26:18.500420 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:26:18.508667 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:26:18.511383 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:26:18.519195 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:26:18.521937 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:26:18.528725 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:26:18.532761 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:26:18.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.533032 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:26:18.538568 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 12 17:26:18.546418 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:26:18.550305 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:26:18.555566 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:26:18.558888 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:26:18.559192 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:26:18.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.570933 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:26:18.574018 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:26:18.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.580240 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:26:18.580774 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:26:18.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.590717 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:26:18.600206 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:26:18.610217 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:26:18.615768 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:26:18.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.619374 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:26:18.623486 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:26:18.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.642246 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:26:18.644403 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:26:18.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.666452 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:26:18.678664 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:26:18.685607 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:26:18.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:18.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.706075 ignition[1276]: INFO : Ignition 2.22.0 Dec 12 17:26:18.708619 ignition[1276]: INFO : Stage: umount Dec 12 17:26:18.708619 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:18.708619 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:18.708619 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:18.725078 ignition[1276]: INFO : PUT result: OK Dec 12 17:26:18.731389 ignition[1276]: INFO : umount: umount passed Dec 12 17:26:18.733546 ignition[1276]: INFO : Ignition finished successfully Dec 12 17:26:18.739796 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:26:18.743668 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:26:18.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.748653 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:26:18.748801 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:26:18.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.756664 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:26:18.756793 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:26:18.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.761583 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 17:26:18.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.761728 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 17:26:18.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.766467 systemd[1]: Stopped target network.target - Network. Dec 12 17:26:18.769334 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:26:18.769473 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:26:18.775134 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:26:18.778945 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:26:18.781308 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:26:18.792447 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:26:18.801701 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:26:18.812735 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:26:18.812840 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 12 17:26:18.819098 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:26:18.819200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:26:18.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.822289 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 17:26:18.822359 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:26:18.827710 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:26:18.827874 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:26:18.831105 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:26:18.831235 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:26:18.832835 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:26:18.840846 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:26:18.863155 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:26:18.863454 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:26:18.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.871157 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:26:18.871984 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:26:18.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.881998 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:26:18.884631 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:26:18.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.894131 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:26:18.895616 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:26:18.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.904000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:26:18.906311 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:26:18.904000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:26:18.911711 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:26:18.911806 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:26:18.918809 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Dec 12 17:26:18.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.921584 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:26:18.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.921707 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:26:18.925768 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:26:18.925875 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:18.930614 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:26:18.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.930733 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:26:18.936865 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:26:18.973209 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:26:18.976003 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:26:18.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.983135 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:26:18.983238 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:26:18.989383 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:26:18.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.992795 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:26:18.997838 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:26:19.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:18.997962 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:26:19.000953 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:26:19.001096 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:26:19.004223 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:26:19.004370 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:26:19.019644 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:26:19.038082 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Dec 12 17:26:19.038450 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:26:19.053239 kernel: kauditd_printk_skb: 31 callbacks suppressed Dec 12 17:26:19.053330 kernel: audit: type=1131 audit(1765560379.045:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.053625 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:26:19.065653 kernel: audit: type=1131 audit(1765560379.060:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.053759 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:26:19.087345 kernel: audit: type=1131 audit(1765560379.070:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.087390 kernel: audit: type=1131 audit(1765560379.085:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.064437 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 17:26:19.102011 kernel: audit: type=1131 audit(1765560379.095:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.064582 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:26:19.120540 kernel: audit: type=1130 audit(1765560379.106:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.120588 kernel: audit: type=1131 audit(1765560379.106:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:19.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.072048 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:26:19.072171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:26:19.087085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:26:19.087217 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:19.101057 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:26:19.101580 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:26:19.139987 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:26:19.142572 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:26:19.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.151080 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:26:19.164922 kernel: audit: type=1131 audit(1765560379.149:82): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.165975 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:26:19.210339 systemd[1]: Switching root. Dec 12 17:26:19.284254 systemd-journald[360]: Journal stopped Dec 12 17:26:22.975917 systemd-journald[360]: Received SIGTERM from PID 1 (systemd). Dec 12 17:26:22.976054 kernel: audit: type=1335 audit(1765560379.289:83): pid=360 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Dec 12 17:26:22.976111 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:26:22.976146 kernel: SELinux: policy capability open_perms=1 Dec 12 17:26:22.976178 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:26:22.976211 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:26:22.976252 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:26:22.976292 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:26:22.976327 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:26:22.976359 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:26:22.976388 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:26:22.976419 kernel: audit: type=1403 audit(1765560379.860:84): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:26:22.976450 systemd[1]: Successfully loaded SELinux policy in 173.398ms. Dec 12 17:26:22.985194 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.337ms. 
Dec 12 17:26:22.985268 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:26:22.985311 systemd[1]: Detected virtualization amazon. Dec 12 17:26:22.985347 systemd[1]: Detected architecture arm64. Dec 12 17:26:22.985379 systemd[1]: Detected first boot. Dec 12 17:26:22.985419 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:26:22.985456 zram_generator::config[1319]: No configuration found. Dec 12 17:26:22.985523 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:26:22.985573 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:26:22.985605 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:26:22.985638 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:26:22.985674 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:26:22.985706 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:26:22.985739 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:26:22.985773 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:26:22.985811 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:26:22.985844 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:26:22.985875 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:26:22.985911 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:26:22.985942 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:26:22.985973 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:26:22.986005 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:26:22.986039 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:26:22.986072 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:26:22.986103 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:26:22.986135 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:26:22.986167 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 17:26:22.986198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:26:22.986235 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:26:22.986269 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:26:22.986299 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:26:22.986334 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:26:22.986364 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:26:22.986398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Dec 12 17:26:22.986432 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:26:22.986466 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 17:26:22.988651 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:26:22.988721 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:26:22.988754 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:26:22.988787 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:26:22.988821 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:26:22.988852 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:26:22.988890 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 17:26:22.988920 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:26:22.988949 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 17:26:22.988980 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 17:26:22.989010 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:26:22.989045 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:26:22.989079 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:26:22.989119 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:26:22.989154 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:26:22.989186 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:26:22.989218 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:26:22.989249 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:26:22.989278 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:26:22.989310 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:26:22.989344 systemd[1]: Reached target machines.target - Containers. Dec 12 17:26:22.989373 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:26:22.989404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:22.989437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:26:22.989469 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:26:22.997546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:26:22.997612 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:26:22.997656 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:22.997690 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:26:22.997720 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:26:22.997750 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Dec 12 17:26:22.997780 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:26:22.997809 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:26:22.997843 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:26:22.997878 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:26:22.997914 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:22.997944 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:26:22.997978 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:26:22.998009 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:26:22.998040 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:26:22.998074 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:26:22.998104 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:26:22.998137 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:26:22.998168 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:26:22.998198 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:26:22.998234 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:26:22.998265 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:26:22.998296 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:26:22.998330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:26:22.998362 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:26:22.998395 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:26:22.998431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:26:22.998461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:26:22.998490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:22.998559 kernel: fuse: init (API version 7.41) Dec 12 17:26:22.998591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:22.998632 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:22.998662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:22.998691 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:26:22.998723 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:26:22.998754 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:26:22.998783 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:26:22.998812 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 17:26:22.998846 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:26:22.998880 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Dec 12 17:26:22.998913 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:26:22.998943 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:26:22.998979 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:26:22.999009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:26:22.999039 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:22.999119 systemd-journald[1396]: Collecting audit messages is enabled. Dec 12 17:26:22.999174 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:26:22.999211 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:26:22.999242 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:26:22.999273 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:26:22.999305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:26:22.999337 systemd-journald[1396]: Journal started Dec 12 17:26:22.999386 systemd-journald[1396]: Runtime Journal (/run/log/journal/ec2d841223dca7af489b42fd872c7288) is 8M, max 75.3M, 67.3M free. Dec 12 17:26:22.414000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 17:26:23.020622 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:26:23.020705 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:26:23.020746 kernel: ACPI: bus type drm_connector registered Dec 12 17:26:22.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.684000 audit: BPF prog-id=14 op=UNLOAD Dec 12 17:26:22.684000 audit: BPF prog-id=13 op=UNLOAD Dec 12 17:26:22.687000 audit: BPF prog-id=15 op=LOAD Dec 12 17:26:22.689000 audit: BPF prog-id=16 op=LOAD Dec 12 17:26:22.689000 audit: BPF prog-id=17 op=LOAD Dec 12 17:26:22.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:22.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:22.956000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 17:26:22.956000 audit[1396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd6d1f470 a2=4000 a3=0 items=0 ppid=1 pid=1396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.956000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 17:26:22.240357 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:26:22.253721 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 12 17:26:22.254720 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 12 17:26:23.042443 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:26:23.042560 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:26:23.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.048224 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:26:23.049815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:26:23.062621 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:26:23.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.068414 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:26:23.072801 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:26:23.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.092272 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:26:23.106531 kernel: loop1: detected capacity change from 0 to 211168 Dec 12 17:26:23.131057 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:26:23.135420 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:26:23.141248 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:26:23.147899 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:26:23.188585 systemd-journald[1396]: Time spent on flushing to /var/log/journal/ec2d841223dca7af489b42fd872c7288 is 90.274ms for 1063 entries. Dec 12 17:26:23.188585 systemd-journald[1396]: System Journal (/var/log/journal/ec2d841223dca7af489b42fd872c7288) is 8M, max 588.1M, 580.1M free. Dec 12 17:26:23.336528 systemd-journald[1396]: Received client request to flush runtime journal. Dec 12 17:26:23.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:23.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.209219 systemd-tmpfiles[1423]: ACLs are not supported, ignoring. Dec 12 17:26:23.209245 systemd-tmpfiles[1423]: ACLs are not supported, ignoring. Dec 12 17:26:23.221902 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:23.247629 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:26:23.257453 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:26:23.266384 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:26:23.283672 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:26:23.291827 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:26:23.341615 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:26:23.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.353768 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:26:23.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.394802 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:26:23.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.400000 audit: BPF prog-id=18 op=LOAD Dec 12 17:26:23.400000 audit: BPF prog-id=19 op=LOAD Dec 12 17:26:23.400000 audit: BPF prog-id=20 op=LOAD Dec 12 17:26:23.404827 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 17:26:23.408000 audit: BPF prog-id=21 op=LOAD Dec 12 17:26:23.413106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:26:23.417742 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:26:23.432528 kernel: loop2: detected capacity change from 0 to 100192 Dec 12 17:26:23.464067 systemd-tmpfiles[1476]: ACLs are not supported, ignoring. Dec 12 17:26:23.464656 systemd-tmpfiles[1476]: ACLs are not supported, ignoring. Dec 12 17:26:23.473537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 12 17:26:23.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.481000 audit: BPF prog-id=22 op=LOAD Dec 12 17:26:23.481000 audit: BPF prog-id=23 op=LOAD Dec 12 17:26:23.482000 audit: BPF prog-id=24 op=LOAD Dec 12 17:26:23.487000 audit: BPF prog-id=25 op=LOAD Dec 12 17:26:23.487000 audit: BPF prog-id=26 op=LOAD Dec 12 17:26:23.487000 audit: BPF prog-id=27 op=LOAD Dec 12 17:26:23.484804 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 17:26:23.490766 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:26:23.593376 systemd-nsresourced[1479]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 17:26:23.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.600036 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 17:26:23.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.655865 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:26:23.815930 systemd-oomd[1474]: No swap; memory pressure usage will be degraded Dec 12 17:26:23.817884 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 17:26:23.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.824556 kernel: loop3: detected capacity change from 0 to 109872 Dec 12 17:26:23.862486 systemd-resolved[1475]: Positive Trust Anchors: Dec 12 17:26:23.862545 systemd-resolved[1475]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:26:23.862555 systemd-resolved[1475]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:26:23.862619 systemd-resolved[1475]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:26:23.878232 systemd-resolved[1475]: Defaulting to hostname 'linux'. Dec 12 17:26:23.881411 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:26:23.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:23.884318 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
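The negative trust anchors listed by systemd-resolved above are name suffixes for which DNSSEC validation is skipped. A minimal sketch of the suffix matching involved, using an abbreviated copy of that list (the helper is illustrative, not resolved's actual implementation):

# Illustrative only: does a name fall under one of the negative trust
# anchors listed by systemd-resolved above? (abbreviated anchor set)
NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
    "ipv4only.arpa", "resolver.arpa", "corp", "home", "internal",
    "intranet", "lan", "local", "private", "test",
}

def under_negative_anchor(name):
    labels = name.rstrip(".").lower().split(".")
    # A name matches if it equals an anchor or is a subdomain of one.
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS for i in range(len(labels)))

print(under_negative_anchor("printer.lan"))   # True  -> validation skipped
print(under_negative_anchor("example.org"))   # False -> validated normally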
Dec 12 17:26:24.105544 kernel: loop4: detected capacity change from 0 to 61504 Dec 12 17:26:24.396109 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:26:24.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:24.400288 kernel: kauditd_printk_skb: 66 callbacks suppressed Dec 12 17:26:24.400404 kernel: audit: type=1130 audit(1765560384.397:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:24.407202 kernel: audit: type=1334 audit(1765560384.399:150): prog-id=8 op=UNLOAD Dec 12 17:26:24.399000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:26:24.399000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:26:24.409860 kernel: audit: type=1334 audit(1765560384.399:151): prog-id=7 op=UNLOAD Dec 12 17:26:24.409979 kernel: audit: type=1334 audit(1765560384.404:152): prog-id=28 op=LOAD Dec 12 17:26:24.404000 audit: BPF prog-id=28 op=LOAD Dec 12 17:26:24.409817 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:26:24.412675 kernel: audit: type=1334 audit(1765560384.406:153): prog-id=29 op=LOAD Dec 12 17:26:24.406000 audit: BPF prog-id=29 op=LOAD Dec 12 17:26:24.448594 kernel: loop5: detected capacity change from 0 to 211168 Dec 12 17:26:24.474539 kernel: loop6: detected capacity change from 0 to 100192 Dec 12 17:26:24.485051 systemd-udevd[1498]: Using default interface naming scheme 'v257'. Dec 12 17:26:24.496583 kernel: loop7: detected capacity change from 0 to 109872 Dec 12 17:26:24.515562 kernel: loop1: detected capacity change from 0 to 61504 Dec 12 17:26:24.527098 (sd-merge)[1500]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Dec 12 17:26:24.534314 (sd-merge)[1500]: Merged extensions into '/usr'. Dec 12 17:26:24.544604 systemd[1]: Reload requested from client PID 1422 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:26:24.544833 systemd[1]: Reloading... Dec 12 17:26:24.702632 zram_generator::config[1533]: No configuration found. Dec 12 17:26:24.890953 (udev-worker)[1587]: Network interface NamePolicy= disabled on kernel command line. Dec 12 17:26:25.352958 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 17:26:25.353735 systemd[1]: Reloading finished in 808 ms. Dec 12 17:26:25.372726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:26:25.376802 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:26:25.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:25.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:25.395029 kernel: audit: type=1130 audit(1765560385.374:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:25.395166 kernel: audit: type=1130 audit(1765560385.383:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:25.441662 systemd[1]: Starting ensure-sysext.service... Dec 12 17:26:25.445000 audit: BPF prog-id=30 op=LOAD Dec 12 17:26:25.452749 kernel: audit: type=1334 audit(1765560385.445:156): prog-id=30 op=LOAD Dec 12 17:26:25.451047 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:26:25.460045 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:26:25.465955 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:26:25.471000 audit: BPF prog-id=31 op=LOAD Dec 12 17:26:25.473000 audit: BPF prog-id=32 op=LOAD Dec 12 17:26:25.480548 kernel: audit: type=1334 audit(1765560385.471:157): prog-id=31 op=LOAD Dec 12 17:26:25.480704 kernel: audit: type=1334 audit(1765560385.473:158): prog-id=32 op=LOAD Dec 12 17:26:25.473000 audit: BPF prog-id=33 op=LOAD Dec 12 17:26:25.474000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:26:25.474000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:26:25.474000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:26:25.475000 audit: BPF prog-id=34 op=LOAD Dec 12 17:26:25.475000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:26:25.485000 audit: BPF prog-id=35 op=LOAD Dec 12 17:26:25.485000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:26:25.486000 audit: BPF prog-id=36 op=LOAD Dec 12 17:26:25.486000 audit: BPF prog-id=37 op=LOAD Dec 12 17:26:25.486000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:26:25.486000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:26:25.487000 audit: BPF prog-id=38 op=LOAD Dec 12 17:26:25.487000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:26:25.488000 audit: BPF prog-id=39 op=LOAD Dec 12 17:26:25.488000 audit: BPF prog-id=40 op=LOAD Dec 12 17:26:25.488000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:26:25.488000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:26:25.489000 audit: BPF prog-id=41 op=LOAD Dec 12 17:26:25.491000 audit: BPF prog-id=42 op=LOAD Dec 12 17:26:25.491000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:26:25.491000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:26:25.492000 audit: BPF prog-id=43 op=LOAD Dec 12 17:26:25.492000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:26:25.493000 audit: BPF prog-id=44 op=LOAD Dec 12 17:26:25.493000 audit: BPF prog-id=45 op=LOAD Dec 12 17:26:25.493000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:26:25.493000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:26:25.536417 systemd[1]: Reload requested from client PID 1627 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:26:25.536457 systemd[1]: Reloading... Dec 12 17:26:25.620308 systemd-tmpfiles[1630]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:26:25.620411 systemd-tmpfiles[1630]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:26:25.621101 systemd-tmpfiles[1630]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:26:25.625219 systemd-tmpfiles[1630]: ACLs are not supported, ignoring. 
Dec 12 17:26:25.626391 systemd-tmpfiles[1630]: ACLs are not supported, ignoring. Dec 12 17:26:25.646685 systemd-tmpfiles[1630]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:26:25.646714 systemd-tmpfiles[1630]: Skipping /boot Dec 12 17:26:25.673225 systemd-tmpfiles[1630]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:26:25.673259 systemd-tmpfiles[1630]: Skipping /boot Dec 12 17:26:25.913606 zram_generator::config[1706]: No configuration found. Dec 12 17:26:25.929790 systemd-networkd[1628]: lo: Link UP Dec 12 17:26:25.929808 systemd-networkd[1628]: lo: Gained carrier Dec 12 17:26:25.943531 systemd-networkd[1628]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:25.944348 systemd-networkd[1628]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:26:25.976589 systemd-networkd[1628]: eth0: Link UP Dec 12 17:26:25.980656 systemd-networkd[1628]: eth0: Gained carrier Dec 12 17:26:25.980698 systemd-networkd[1628]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:26.009709 systemd-networkd[1628]: eth0: DHCPv4 address 172.31.17.228/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 12 17:26:26.449896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 12 17:26:26.453570 systemd[1]: Reloading finished in 915 ms. Dec 12 17:26:26.474550 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:26:26.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:26.478000 audit: BPF prog-id=46 op=LOAD Dec 12 17:26:26.478000 audit: BPF prog-id=47 op=LOAD Dec 12 17:26:26.478000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:26:26.478000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:26:26.481000 audit: BPF prog-id=48 op=LOAD Dec 12 17:26:26.481000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:26:26.481000 audit: BPF prog-id=49 op=LOAD Dec 12 17:26:26.482000 audit: BPF prog-id=50 op=LOAD Dec 12 17:26:26.482000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:26:26.482000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:26:26.483000 audit: BPF prog-id=51 op=LOAD Dec 12 17:26:26.483000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:26:26.483000 audit: BPF prog-id=52 op=LOAD Dec 12 17:26:26.484000 audit: BPF prog-id=53 op=LOAD Dec 12 17:26:26.484000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:26:26.484000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:26:26.488000 audit: BPF prog-id=54 op=LOAD Dec 12 17:26:26.488000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:26:26.489000 audit: BPF prog-id=55 op=LOAD Dec 12 17:26:26.493000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:26:26.495000 audit: BPF prog-id=56 op=LOAD Dec 12 17:26:26.495000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:26:26.495000 audit: BPF prog-id=57 op=LOAD Dec 12 17:26:26.496000 audit: BPF prog-id=58 op=LOAD Dec 12 17:26:26.496000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:26:26.496000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:26:26.497000 audit: BPF prog-id=59 op=LOAD Dec 12 17:26:26.497000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:26:26.497000 audit: BPF prog-id=60 op=LOAD Dec 12 17:26:26.497000 audit: BPF prog-id=61 op=LOAD Dec 12 17:26:26.497000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:26:26.497000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:26:26.505908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:26:26.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.513171 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:26.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.584646 systemd[1]: Reached target network.target - Network. Dec 12 17:26:26.589964 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:26:26.600966 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:26:26.604043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:26.612000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:26:26.621166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:26.632432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:26:26.635973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:26:26.636446 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
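The DHCPv4 lease recorded above (172.31.17.228/20 with gateway 172.31.16.1) can be sanity-checked with the standard ipaddress module; a small worked example using exactly those values from the log:

# Sanity-check the DHCPv4 lease logged by systemd-networkd above.
import ipaddress

iface = ipaddress.ip_interface("172.31.17.228/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                  # 172.31.16.0/20
print(gateway in iface.network)       # True: the gateway is on-link
print(iface.network.num_addresses)    # 4096 addresses in a /20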
Dec 12 17:26:26.640001 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:26:26.646715 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:26:26.651707 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:26.661176 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:26:26.670179 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:26:26.683176 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:26:26.696178 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:26:26.709224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:26:26.710985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:26:26.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.716909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:26.718811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:26.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.723221 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:26.730719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:26.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.760841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:26.771093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:26:26.785166 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:26:26.796270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:26.806301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 12 17:26:26.811898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:26:26.812355 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:26.812676 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:26.813083 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:26:26.829741 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:26:26.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.850400 systemd[1]: Finished ensure-sysext.service. Dec 12 17:26:26.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.861073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:26.863749 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:26.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.872014 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:26:26.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.888134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:26:26.888233 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:26:26.894000 audit[1821]: SYSTEM_BOOT pid=1821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.913649 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:26:26.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.925274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
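Unit names such as systemd-fsck@dev-disk-by\x2dlabel-OEM.service and dev-disk-by\x2dlabel-OEM.device above use systemd's path escaping, where '/' becomes '-' and a literal '-' becomes '\x2d'. A rough sketch of that mapping (systemd-escape(1) is the authoritative tool; this helper is only illustrative and skips corner cases such as leading dots):

# Rough sketch of the systemd path escaping seen in the unit names above.
def systemd_escape_path(path):
    trimmed = path.strip("/")
    out = []
    for ch in trimmed:
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/OEM"))
# dev-disk-by\x2dlabel-OEM  -> matches the device unit referenced above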
Dec 12 17:26:26.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.925956 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:26:26.929819 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:26:26.932075 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:26:26.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.935453 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:26.937621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:26.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.947705 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:26:26.952109 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:26:26.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:26.997024 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:26:26.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.084000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:26:27.084000 audit[1857]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd7f4de60 a2=420 a3=0 items=0 ppid=1805 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:27.084000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:27.087020 augenrules[1857]: No rules Dec 12 17:26:27.090037 systemd[1]: audit-rules.service: Deactivated successfully. 
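The PROCTITLE record in the audit output above carries the auditctl command line as hex, with NUL separators between arguments; decoding that exact string recovers the argv:

# Decode the audit PROCTITLE hex from the record above back into argv.
proctitle_hex = ("2F7362696E2F617564697463746C002D52002F6574632F6175646974"
                 "2F61756469742E72756C6573")

argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print([a.decode() for a in argv])
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']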
Dec 12 17:26:27.090891 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:26:27.144744 systemd-networkd[1628]: eth0: Gained IPv6LL Dec 12 17:26:27.149713 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:26:27.153305 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:26:29.809866 ldconfig[1811]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:26:29.823616 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:26:29.830393 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:26:29.860529 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:26:29.863784 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:26:29.867977 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:26:29.870890 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:26:29.874092 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:26:29.876810 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:26:29.880310 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:26:29.883556 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:26:29.886074 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:26:29.889972 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:26:29.890032 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:26:29.892439 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:26:29.896439 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:26:29.901974 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:26:29.908358 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:26:29.911772 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:26:29.914812 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:26:29.921117 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:26:29.924337 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:26:29.928244 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:26:29.931012 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:26:29.933882 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:26:29.936403 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:26:29.936674 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:26:29.939848 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:26:29.944921 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 17:26:29.951928 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Dec 12 17:26:29.961847 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:26:29.968615 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:26:29.973965 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:26:29.976661 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:26:29.981267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:29.988274 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:26:29.998059 systemd[1]: Started ntpd.service - Network Time Service. Dec 12 17:26:30.003964 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:26:30.015863 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:26:30.031263 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 12 17:26:30.043231 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:26:30.051306 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:26:30.059741 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:26:30.062681 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:26:30.063637 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:26:30.071945 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:26:30.077342 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:26:30.113055 jq[1873]: false Dec 12 17:26:30.135104 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:26:30.138778 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:26:30.139268 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:26:30.173699 jq[1886]: true Dec 12 17:26:30.207399 extend-filesystems[1874]: Found /dev/nvme0n1p6 Dec 12 17:26:30.212281 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:26:30.213761 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:26:30.221355 tar[1895]: linux-arm64/LICENSE Dec 12 17:26:30.221355 tar[1895]: linux-arm64/helm Dec 12 17:26:30.266524 extend-filesystems[1874]: Found /dev/nvme0n1p9 Dec 12 17:26:30.282594 extend-filesystems[1874]: Checking size of /dev/nvme0n1p9 Dec 12 17:26:30.296276 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:26:30.297950 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 12 17:26:30.306046 jq[1906]: true Dec 12 17:26:30.316519 ntpd[1877]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:17 UTC 2025 (1): Starting Dec 12 17:26:30.320672 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:17 UTC 2025 (1): Starting Dec 12 17:26:30.320672 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 17:26:30.320672 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: ---------------------------------------------------- Dec 12 17:26:30.320672 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: ntp-4 is maintained by Network Time Foundation, Dec 12 17:26:30.320672 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 17:26:30.320672 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: corporation. Support and training for ntp-4 are Dec 12 17:26:30.320672 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: available at https://www.nwtime.org/support Dec 12 17:26:30.320672 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: ---------------------------------------------------- Dec 12 17:26:30.316641 ntpd[1877]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 17:26:30.316662 ntpd[1877]: ---------------------------------------------------- Dec 12 17:26:30.316679 ntpd[1877]: ntp-4 is maintained by Network Time Foundation, Dec 12 17:26:30.316696 ntpd[1877]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 17:26:30.316713 ntpd[1877]: corporation. Support and training for ntp-4 are Dec 12 17:26:30.316729 ntpd[1877]: available at https://www.nwtime.org/support Dec 12 17:26:30.316747 ntpd[1877]: ---------------------------------------------------- Dec 12 17:26:30.336308 ntpd[1877]: proto: precision = 0.096 usec (-23) Dec 12 17:26:30.339786 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: proto: precision = 0.096 usec (-23) Dec 12 17:26:30.339786 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: basedate set to 2025-11-30 Dec 12 17:26:30.339786 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: gps base set to 2025-11-30 (week 2395) Dec 12 17:26:30.339786 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 17:26:30.339786 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 17:26:30.336786 ntpd[1877]: basedate set to 2025-11-30 Dec 12 17:26:30.336812 ntpd[1877]: gps base set to 2025-11-30 (week 2395) Dec 12 17:26:30.336991 ntpd[1877]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 17:26:30.337047 ntpd[1877]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 17:26:30.349914 ntpd[1877]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 17:26:30.351124 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 17:26:30.351124 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: Listen normally on 3 eth0 172.31.17.228:123 Dec 12 17:26:30.351124 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: Listen normally on 4 lo [::1]:123 Dec 12 17:26:30.351124 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: Listen normally on 5 eth0 [fe80::43a:caff:fe5b:6c11%2]:123 Dec 12 17:26:30.351124 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: Listening on routing socket on fd #22 for interface updates Dec 12 17:26:30.350042 ntpd[1877]: Listen normally on 3 eth0 172.31.17.228:123 Dec 12 17:26:30.350094 ntpd[1877]: Listen normally on 4 lo [::1]:123 Dec 12 17:26:30.350140 ntpd[1877]: Listen normally on 5 eth0 [fe80::43a:caff:fe5b:6c11%2]:123 Dec 12 17:26:30.350185 ntpd[1877]: Listening on routing socket on fd #22 for interface updates Dec 12 17:26:30.369235 dbus-daemon[1871]: [system] SELinux support is enabled Dec 12 
17:26:30.374703 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:26:30.378239 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:26:30.386963 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:26:30.387020 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:26:30.390056 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:26:30.390103 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:26:30.406304 extend-filesystems[1874]: Resized partition /dev/nvme0n1p9 Dec 12 17:26:30.417404 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:26:30.417985 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:26:30.417985 ntpd[1877]: 12 Dec 17:26:30 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:26:30.417467 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:26:30.426256 extend-filesystems[1946]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:26:30.430858 dbus-daemon[1871]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1628 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 17:26:30.441522 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Dec 12 17:26:30.468670 dbus-daemon[1871]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 12 17:26:30.474533 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Dec 12 17:26:30.507570 extend-filesystems[1946]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 12 17:26:30.507570 extend-filesystems[1946]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 12 17:26:30.507570 extend-filesystems[1946]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Dec 12 17:26:30.511945 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 17:26:30.522801 update_engine[1884]: I20251212 17:26:30.514328 1884 main.cc:92] Flatcar Update Engine starting Dec 12 17:26:30.525296 extend-filesystems[1874]: Resized filesystem in /dev/nvme0n1p9 Dec 12 17:26:30.540666 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 12 17:26:30.544355 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:26:30.546110 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:26:30.555225 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 12 17:26:30.565177 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:26:30.573366 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
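The resize messages above grow the ext4 filesystem on /dev/nvme0n1p9 from 1617920 to 2604027 blocks of 4 KiB; a quick conversion of those figures into sizes (block counts taken from the log):

# Convert the ext4 block counts from the resize messages above into sizes.
BLOCK = 4096                         # 4k blocks, per the resize output
old_blocks, new_blocks = 1_617_920, 2_604_027

def gib(blocks):
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")   # ~6.17 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")   # ~9.93 GiB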
Dec 12 17:26:30.580994 update_engine[1884]: I20251212 17:26:30.579727 1884 update_check_scheduler.cc:74] Next update check in 4m30s Dec 12 17:26:30.615815 coreos-metadata[1870]: Dec 12 17:26:30.615 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 17:26:30.636124 coreos-metadata[1870]: Dec 12 17:26:30.636 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 12 17:26:30.644774 coreos-metadata[1870]: Dec 12 17:26:30.644 INFO Fetch successful Dec 12 17:26:30.644774 coreos-metadata[1870]: Dec 12 17:26:30.644 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 12 17:26:30.652138 coreos-metadata[1870]: Dec 12 17:26:30.652 INFO Fetch successful Dec 12 17:26:30.652138 coreos-metadata[1870]: Dec 12 17:26:30.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 12 17:26:30.657648 coreos-metadata[1870]: Dec 12 17:26:30.657 INFO Fetch successful Dec 12 17:26:30.657648 coreos-metadata[1870]: Dec 12 17:26:30.657 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 12 17:26:30.659348 coreos-metadata[1870]: Dec 12 17:26:30.659 INFO Fetch successful Dec 12 17:26:30.659348 coreos-metadata[1870]: Dec 12 17:26:30.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 12 17:26:30.666128 coreos-metadata[1870]: Dec 12 17:26:30.665 INFO Fetch failed with 404: resource not found Dec 12 17:26:30.669678 coreos-metadata[1870]: Dec 12 17:26:30.666 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 12 17:26:30.669807 bash[1965]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:26:30.673568 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:26:30.677456 coreos-metadata[1870]: Dec 12 17:26:30.677 INFO Fetch successful Dec 12 17:26:30.677456 coreos-metadata[1870]: Dec 12 17:26:30.677 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 12 17:26:30.688989 coreos-metadata[1870]: Dec 12 17:26:30.688 INFO Fetch successful Dec 12 17:26:30.688989 coreos-metadata[1870]: Dec 12 17:26:30.688 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 12 17:26:30.691599 coreos-metadata[1870]: Dec 12 17:26:30.691 INFO Fetch successful Dec 12 17:26:30.692006 coreos-metadata[1870]: Dec 12 17:26:30.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 12 17:26:30.694046 systemd[1]: Starting sshkeys.service... Dec 12 17:26:30.706720 coreos-metadata[1870]: Dec 12 17:26:30.706 INFO Fetch successful Dec 12 17:26:30.706720 coreos-metadata[1870]: Dec 12 17:26:30.706 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 12 17:26:30.714865 coreos-metadata[1870]: Dec 12 17:26:30.714 INFO Fetch successful Dec 12 17:26:30.761536 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 17:26:30.777273 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 17:26:30.977397 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 17:26:30.981311 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
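The coreos-metadata entries above follow the EC2 IMDSv2 pattern: a PUT to http://169.254.169.254/latest/api/token, then GETs under the 2021-01-03 meta-data tree with the token attached. A minimal sketch of the same flow with only the standard library (the two X-aws-ec2-metadata-token headers are AWS's documented IMDSv2 headers; the requests only succeed from inside an EC2 instance):

# Illustrative IMDSv2 fetch, mirroring the coreos-metadata requests above.
# Only works on an EC2 instance, where 169.254.169.254 is reachable.
import urllib.request

IMDS = "http://169.254.169.254"

token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

meta_req = urllib.request.Request(
    f"{IMDS}/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token})
print(urllib.request.urlopen(meta_req, timeout=2).read().decode())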
Dec 12 17:26:30.996662 systemd-logind[1883]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:26:30.996723 systemd-logind[1883]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 12 17:26:30.997843 systemd-logind[1883]: New seat seat0. Dec 12 17:26:31.001084 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:26:31.205471 amazon-ssm-agent[1960]: Initializing new seelog logger Dec 12 17:26:31.213602 amazon-ssm-agent[1960]: New Seelog Logger Creation Complete Dec 12 17:26:31.213602 amazon-ssm-agent[1960]: 2025/12/12 17:26:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:31.213602 amazon-ssm-agent[1960]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:31.213602 amazon-ssm-agent[1960]: 2025/12/12 17:26:31 processing appconfig overrides Dec 12 17:26:31.222331 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.2220 INFO Proxy environment variables: Dec 12 17:26:31.226529 amazon-ssm-agent[1960]: 2025/12/12 17:26:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:31.226529 amazon-ssm-agent[1960]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:31.230519 amazon-ssm-agent[1960]: 2025/12/12 17:26:31 processing appconfig overrides Dec 12 17:26:31.230920 amazon-ssm-agent[1960]: 2025/12/12 17:26:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:31.230920 amazon-ssm-agent[1960]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:31.231132 amazon-ssm-agent[1960]: 2025/12/12 17:26:31 processing appconfig overrides Dec 12 17:26:31.255930 amazon-ssm-agent[1960]: 2025/12/12 17:26:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:31.255930 amazon-ssm-agent[1960]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 12 17:26:31.256144 amazon-ssm-agent[1960]: 2025/12/12 17:26:31 processing appconfig overrides Dec 12 17:26:31.324172 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.2220 INFO https_proxy: Dec 12 17:26:31.325376 containerd[1911]: time="2025-12-12T17:26:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:26:31.345635 containerd[1911]: time="2025-12-12T17:26:31.343714808Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:26:31.402791 containerd[1911]: time="2025-12-12T17:26:31.402717668Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.932µs" Dec 12 17:26:31.402972 containerd[1911]: time="2025-12-12T17:26:31.402936380Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:26:31.403125 containerd[1911]: time="2025-12-12T17:26:31.403095680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:26:31.403239 containerd[1911]: time="2025-12-12T17:26:31.403209440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:26:31.403813 containerd[1911]: time="2025-12-12T17:26:31.403759016Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:26:31.405927 containerd[1911]: time="2025-12-12T17:26:31.405830216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:26:31.407543 containerd[1911]: time="2025-12-12T17:26:31.407117396Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:26:31.407543 containerd[1911]: time="2025-12-12T17:26:31.407191712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:26:31.408556 containerd[1911]: time="2025-12-12T17:26:31.407845772Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:26:31.408556 containerd[1911]: time="2025-12-12T17:26:31.407909852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:26:31.408556 containerd[1911]: time="2025-12-12T17:26:31.407944244Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:26:31.408556 containerd[1911]: time="2025-12-12T17:26:31.407968892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:26:31.408556 containerd[1911]: time="2025-12-12T17:26:31.408355796Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:26:31.408556 containerd[1911]: time="2025-12-12T17:26:31.408398144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 
Dec 12 17:26:31.410085 coreos-metadata[1972]: Dec 12 17:26:31.410 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 17:26:31.414182 containerd[1911]: time="2025-12-12T17:26:31.414059324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:26:31.417688 coreos-metadata[1972]: Dec 12 17:26:31.417 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 12 17:26:31.419988 containerd[1911]: time="2025-12-12T17:26:31.419790824Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:26:31.420594 coreos-metadata[1972]: Dec 12 17:26:31.420 INFO Fetch successful Dec 12 17:26:31.420710 coreos-metadata[1972]: Dec 12 17:26:31.420 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 17:26:31.423425 containerd[1911]: time="2025-12-12T17:26:31.423258464Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:26:31.423425 containerd[1911]: time="2025-12-12T17:26:31.423355412Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:26:31.424312 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.2221 INFO http_proxy: Dec 12 17:26:31.425842 coreos-metadata[1972]: Dec 12 17:26:31.425 INFO Fetch successful Dec 12 17:26:31.429229 containerd[1911]: time="2025-12-12T17:26:31.425480456Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:26:31.430651 locksmithd[1961]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:26:31.431174 containerd[1911]: time="2025-12-12T17:26:31.430788104Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:26:31.431174 containerd[1911]: time="2025-12-12T17:26:31.430994396Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:26:31.437696 unknown[1972]: wrote ssh authorized keys file for user: core Dec 12 17:26:31.450342 containerd[1911]: time="2025-12-12T17:26:31.450257984Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:26:31.450458 containerd[1911]: time="2025-12-12T17:26:31.450392240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.450816308Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.450892652Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.450932228Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.450961664Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.450990608Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.451017884Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.451053788Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.451085360Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.451113764Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.451163396Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.451192736Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:26:31.453261 containerd[1911]: time="2025-12-12T17:26:31.451224056Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:26:31.454835 containerd[1911]: time="2025-12-12T17:26:31.454741820Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:26:31.454835 containerd[1911]: time="2025-12-12T17:26:31.454824284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:26:31.455016 containerd[1911]: time="2025-12-12T17:26:31.454861676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:26:31.455016 containerd[1911]: time="2025-12-12T17:26:31.454895648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:26:31.455016 containerd[1911]: time="2025-12-12T17:26:31.454926572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:26:31.455016 containerd[1911]: time="2025-12-12T17:26:31.454953680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:26:31.455016 containerd[1911]: time="2025-12-12T17:26:31.454984016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:26:31.455016 containerd[1911]: time="2025-12-12T17:26:31.455010896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:26:31.456896 containerd[1911]: time="2025-12-12T17:26:31.455037404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:26:31.456896 containerd[1911]: time="2025-12-12T17:26:31.455065268Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:26:31.456896 containerd[1911]: time="2025-12-12T17:26:31.455104712Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:26:31.456896 containerd[1911]: time="2025-12-12T17:26:31.455178596Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:26:31.456896 containerd[1911]: time="2025-12-12T17:26:31.455265104Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:26:31.456896 containerd[1911]: time="2025-12-12T17:26:31.455294828Z" level=info msg="Start snapshots syncer" Dec 12 17:26:31.456896 containerd[1911]: time="2025-12-12T17:26:31.455361572Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:26:31.465611 containerd[1911]: time="2025-12-12T17:26:31.464694908Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:26:31.465611 containerd[1911]: time="2025-12-12T17:26:31.464818436Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:26:31.465926 containerd[1911]: time="2025-12-12T17:26:31.464959772Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:26:31.465926 containerd[1911]: time="2025-12-12T17:26:31.465234740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:26:31.465926 containerd[1911]: time="2025-12-12T17:26:31.465314084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:26:31.465926 containerd[1911]: time="2025-12-12T17:26:31.465344048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:26:31.465926 containerd[1911]: time="2025-12-12T17:26:31.465371852Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:26:31.465926 containerd[1911]: time="2025-12-12T17:26:31.465401996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 
17:26:31.465926 containerd[1911]: time="2025-12-12T17:26:31.465428396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:26:31.465926 containerd[1911]: time="2025-12-12T17:26:31.465456896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:26:31.472593 containerd[1911]: time="2025-12-12T17:26:31.465484424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:26:31.472697 containerd[1911]: time="2025-12-12T17:26:31.472623885Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:26:31.472767 containerd[1911]: time="2025-12-12T17:26:31.472738413Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:26:31.472820 containerd[1911]: time="2025-12-12T17:26:31.472773993Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:26:31.472820 containerd[1911]: time="2025-12-12T17:26:31.472798905Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:26:31.472907 containerd[1911]: time="2025-12-12T17:26:31.472825521Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:26:31.472907 containerd[1911]: time="2025-12-12T17:26:31.472847517Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:26:31.472907 containerd[1911]: time="2025-12-12T17:26:31.472873173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:26:31.473072 containerd[1911]: time="2025-12-12T17:26:31.472903593Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:26:31.473072 containerd[1911]: time="2025-12-12T17:26:31.472931625Z" level=info msg="runtime interface created" Dec 12 17:26:31.473072 containerd[1911]: time="2025-12-12T17:26:31.472946157Z" level=info msg="created NRI interface" Dec 12 17:26:31.473072 containerd[1911]: time="2025-12-12T17:26:31.472966677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:26:31.473072 containerd[1911]: time="2025-12-12T17:26:31.472996965Z" level=info msg="Connect containerd service" Dec 12 17:26:31.473072 containerd[1911]: time="2025-12-12T17:26:31.473047329Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:26:31.486180 containerd[1911]: time="2025-12-12T17:26:31.486022245Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:26:31.490532 sshd_keygen[1936]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:26:31.526264 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.2221 INFO no_proxy: Dec 12 17:26:31.534342 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
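The containerd error just above ("no network config found in /etc/cni/net.d") is expected on a node that has not joined a cluster yet: the CRI config logged earlier points confDir at /etc/cni/net.d and binDirs at /opt/cni/bin, and nothing has installed a network config there. Purely as a sketch (the network name, bridge name and subnet below are made-up values; on a real cluster a CNI add-on normally writes this file), a minimal bridge conflist that would satisfy the loader could be generated like this:

    # Sketch: write a hypothetical minimal CNI conflist into the confDir that
    # containerd's CRI config points at. All values below are illustrative.
    import json
    import pathlib

    conflist = {
        "cniVersion": "1.0.0",
        "name": "demo-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
            },
            {"type": "loopback"},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-demo-net.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2))

Until a config like this appears, the CRI plugin keeps running but cannot set up pod networking.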
Dec 12 17:26:31.538280 dbus-daemon[1871]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 17:26:31.545072 dbus-daemon[1871]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1954 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 17:26:31.556733 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 17:26:31.598470 update-ssh-keys[2042]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:26:31.601740 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 17:26:31.616175 systemd[1]: Finished sshkeys.service. Dec 12 17:26:31.637868 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.2304 INFO Checking if agent identity type OnPrem can be assumed Dec 12 17:26:31.738725 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.2306 INFO Checking if agent identity type EC2 can be assumed Dec 12 17:26:31.740184 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:26:31.748830 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:26:31.841765 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.6819 INFO Agent will take identity from EC2 Dec 12 17:26:31.841535 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:26:31.843277 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:26:31.853014 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:26:31.941550 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.7033 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 12 17:26:31.975668 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:26:31.988143 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:26:32.000106 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 17:26:32.003108 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:26:32.041389 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.7033 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 12 17:26:32.149535 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.7033 INFO [amazon-ssm-agent] Starting Core Agent Dec 12 17:26:32.252610 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.7033 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 12 17:26:32.276001 polkitd[2070]: Started polkitd version 126 Dec 12 17:26:32.329316 polkitd[2070]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 17:26:32.337063 polkitd[2070]: Loading rules from directory /run/polkit-1/rules.d Dec 12 17:26:32.337819 polkitd[2070]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 17:26:32.342960 containerd[1911]: time="2025-12-12T17:26:32.341766057Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:26:32.342960 containerd[1911]: time="2025-12-12T17:26:32.341905221Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 17:26:32.342771 polkitd[2070]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 17:26:32.342863 polkitd[2070]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 17:26:32.342963 polkitd[2070]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 17:26:32.345946 containerd[1911]: time="2025-12-12T17:26:32.345715725Z" level=info msg="Start subscribing containerd event" Dec 12 17:26:32.346645 containerd[1911]: time="2025-12-12T17:26:32.346572885Z" level=info msg="Start recovering state" Dec 12 17:26:32.349550 containerd[1911]: time="2025-12-12T17:26:32.347724309Z" level=info msg="Start event monitor" Dec 12 17:26:32.349550 containerd[1911]: time="2025-12-12T17:26:32.347783769Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:26:32.349550 containerd[1911]: time="2025-12-12T17:26:32.347817237Z" level=info msg="Start streaming server" Dec 12 17:26:32.349550 containerd[1911]: time="2025-12-12T17:26:32.347840217Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:26:32.349550 containerd[1911]: time="2025-12-12T17:26:32.347860125Z" level=info msg="runtime interface starting up..." Dec 12 17:26:32.349550 containerd[1911]: time="2025-12-12T17:26:32.347878233Z" level=info msg="starting plugins..." Dec 12 17:26:32.349550 containerd[1911]: time="2025-12-12T17:26:32.347924109Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:26:32.349641 polkitd[2070]: Finished loading, compiling and executing 2 rules Dec 12 17:26:32.360372 dbus-daemon[1871]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 17:26:32.362408 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.7033 INFO [Registrar] Starting registrar module Dec 12 17:26:32.350268 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 17:26:32.361165 polkitd[2070]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 17:26:32.363449 containerd[1911]: time="2025-12-12T17:26:32.362749425Z" level=info msg="containerd successfully booted in 1.038187s" Dec 12 17:26:32.354437 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:26:32.420763 systemd-hostnamed[1954]: Hostname set to (transient) Dec 12 17:26:32.420799 systemd-resolved[1475]: System hostname changed to 'ip-172-31-17-228'. Dec 12 17:26:32.460727 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.7105 INFO [EC2Identity] Checking disk for registration info Dec 12 17:26:32.543912 tar[1895]: linux-arm64/README.md Dec 12 17:26:32.561336 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.7106 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 12 17:26:32.576816 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:26:32.588537 amazon-ssm-agent[1960]: 2025/12/12 17:26:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:32.588537 amazon-ssm-agent[1960]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
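The coreos-metadata entries earlier in this excerpt show the usual IMDSv2 sequence: a PUT to http://169.254.169.254/latest/api/token, then GETs against the 2021-01-03 metadata tree for the instance's public SSH keys, which the coreos-metadata-sshkeys@core unit then writes into /home/core/.ssh/authorized_keys. As a rough illustration only (not the agent's actual implementation; the 21600-second token TTL is an assumed value), the same two requests look like this with Python's standard library:

    # Hypothetical re-creation of the two IMDS requests logged above. The
    # header names follow the standard EC2 IMDSv2 contract; the TTL is assumed.
    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT /latest/api/token to obtain a session token.
    req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req).read().decode()

    # Step 2: GET the public-keys listing with the token attached.
    req = urllib.request.Request(
        IMDS + "/2021-01-03/meta-data/public-keys",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(req).read().decode())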
Dec 12 17:26:32.588537 amazon-ssm-agent[1960]: 2025/12/12 17:26:32 processing appconfig overrides Dec 12 17:26:32.632957 amazon-ssm-agent[1960]: 2025-12-12 17:26:31.7106 INFO [EC2Identity] Generating registration keypair Dec 12 17:26:32.632957 amazon-ssm-agent[1960]: 2025-12-12 17:26:32.5292 INFO [EC2Identity] Checking write access before registering Dec 12 17:26:32.632957 amazon-ssm-agent[1960]: 2025-12-12 17:26:32.5310 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 12 17:26:32.632957 amazon-ssm-agent[1960]: 2025-12-12 17:26:32.5877 INFO [EC2Identity] EC2 registration was successful. Dec 12 17:26:32.632957 amazon-ssm-agent[1960]: 2025-12-12 17:26:32.5878 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Dec 12 17:26:32.632957 amazon-ssm-agent[1960]: 2025-12-12 17:26:32.5879 INFO [CredentialRefresher] credentialRefresher has started Dec 12 17:26:32.632957 amazon-ssm-agent[1960]: 2025-12-12 17:26:32.5880 INFO [CredentialRefresher] Starting credentials refresher loop Dec 12 17:26:32.632957 amazon-ssm-agent[1960]: 2025-12-12 17:26:32.6322 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 12 17:26:32.632957 amazon-ssm-agent[1960]: 2025-12-12 17:26:32.6326 INFO [CredentialRefresher] Credentials ready Dec 12 17:26:32.661577 amazon-ssm-agent[1960]: 2025-12-12 17:26:32.6328 INFO [CredentialRefresher] Next credential rotation will be in 29.9999904925 minutes Dec 12 17:26:33.668099 amazon-ssm-agent[1960]: 2025-12-12 17:26:33.6644 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 12 17:26:33.769332 amazon-ssm-agent[1960]: 2025-12-12 17:26:33.6689 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2152) started Dec 12 17:26:33.869672 amazon-ssm-agent[1960]: 2025-12-12 17:26:33.6689 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 12 17:26:34.063386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:34.068758 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:26:34.074118 systemd[1]: Startup finished in 4.131s (kernel) + 11.867s (initrd) + 14.385s (userspace) = 30.384s. Dec 12 17:26:34.087077 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:26:35.067269 kubelet[2168]: E1212 17:26:35.067171 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:26:35.071698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:26:35.072031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:26:35.072828 systemd[1]: kubelet.service: Consumed 1.518s CPU time, 259M memory peak. Dec 12 17:26:37.133058 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:26:37.135750 systemd[1]: Started sshd@0-172.31.17.228:22-139.178.68.195:52662.service - OpenSSH per-connection server daemon (139.178.68.195:52662). Dec 12 17:26:37.560873 systemd-resolved[1475]: Clock change detected. Flushing caches. 
Dec 12 17:26:37.768503 sshd[2180]: Accepted publickey for core from 139.178.68.195 port 52662 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:37.772397 sshd-session[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:37.791546 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:26:37.793848 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:26:37.800549 systemd-logind[1883]: New session 1 of user core. Dec 12 17:26:37.836923 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:26:37.842738 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:26:37.863966 (systemd)[2185]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:26:37.868919 systemd-logind[1883]: New session c1 of user core. Dec 12 17:26:38.150179 systemd[2185]: Queued start job for default target default.target. Dec 12 17:26:38.159945 systemd[2185]: Created slice app.slice - User Application Slice. Dec 12 17:26:38.160017 systemd[2185]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:26:38.160048 systemd[2185]: Reached target paths.target - Paths. Dec 12 17:26:38.160147 systemd[2185]: Reached target timers.target - Timers. Dec 12 17:26:38.162547 systemd[2185]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:26:38.165996 systemd[2185]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 17:26:38.195588 systemd[2185]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:26:38.195810 systemd[2185]: Reached target sockets.target - Sockets. Dec 12 17:26:38.197527 systemd[2185]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:26:38.197722 systemd[2185]: Reached target basic.target - Basic System. Dec 12 17:26:38.197849 systemd[2185]: Reached target default.target - Main User Target. Dec 12 17:26:38.197912 systemd[2185]: Startup finished in 317ms. Dec 12 17:26:38.198262 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:26:38.211008 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:26:38.304790 systemd[1]: Started sshd@1-172.31.17.228:22-139.178.68.195:52678.service - OpenSSH per-connection server daemon (139.178.68.195:52678). Dec 12 17:26:38.490612 sshd[2198]: Accepted publickey for core from 139.178.68.195 port 52678 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:38.493171 sshd-session[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:38.501995 systemd-logind[1883]: New session 2 of user core. Dec 12 17:26:38.522933 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:26:38.588738 sshd[2201]: Connection closed by 139.178.68.195 port 52678 Dec 12 17:26:38.589575 sshd-session[2198]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:38.597004 systemd[1]: sshd@1-172.31.17.228:22-139.178.68.195:52678.service: Deactivated successfully. Dec 12 17:26:38.600610 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:26:38.602770 systemd-logind[1883]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:26:38.605554 systemd-logind[1883]: Removed session 2. 
Dec 12 17:26:38.623923 systemd[1]: Started sshd@2-172.31.17.228:22-139.178.68.195:52688.service - OpenSSH per-connection server daemon (139.178.68.195:52688). Dec 12 17:26:38.802690 sshd[2207]: Accepted publickey for core from 139.178.68.195 port 52688 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:38.805385 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:38.815547 systemd-logind[1883]: New session 3 of user core. Dec 12 17:26:38.817978 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:26:38.875621 sshd[2210]: Connection closed by 139.178.68.195 port 52688 Dec 12 17:26:38.876508 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:38.882934 systemd[1]: sshd@2-172.31.17.228:22-139.178.68.195:52688.service: Deactivated successfully. Dec 12 17:26:38.887977 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:26:38.893328 systemd-logind[1883]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:26:38.895248 systemd-logind[1883]: Removed session 3. Dec 12 17:26:38.916190 systemd[1]: Started sshd@3-172.31.17.228:22-139.178.68.195:52690.service - OpenSSH per-connection server daemon (139.178.68.195:52690). Dec 12 17:26:39.102518 sshd[2216]: Accepted publickey for core from 139.178.68.195 port 52690 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:39.104908 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:39.113756 systemd-logind[1883]: New session 4 of user core. Dec 12 17:26:39.133022 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:26:39.197692 sshd[2219]: Connection closed by 139.178.68.195 port 52690 Dec 12 17:26:39.198550 sshd-session[2216]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:39.205626 systemd[1]: sshd@3-172.31.17.228:22-139.178.68.195:52690.service: Deactivated successfully. Dec 12 17:26:39.209424 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:26:39.213334 systemd-logind[1883]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:26:39.217276 systemd-logind[1883]: Removed session 4. Dec 12 17:26:39.235892 systemd[1]: Started sshd@4-172.31.17.228:22-139.178.68.195:52706.service - OpenSSH per-connection server daemon (139.178.68.195:52706). Dec 12 17:26:39.428603 sshd[2225]: Accepted publickey for core from 139.178.68.195 port 52706 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:39.431757 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:39.442598 systemd-logind[1883]: New session 5 of user core. Dec 12 17:26:39.449975 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:26:39.596429 sudo[2229]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:26:39.597177 sudo[2229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:39.616918 sudo[2229]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:39.642701 sshd[2228]: Connection closed by 139.178.68.195 port 52706 Dec 12 17:26:39.641805 sshd-session[2225]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:39.651524 systemd-logind[1883]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:26:39.652611 systemd[1]: sshd@4-172.31.17.228:22-139.178.68.195:52706.service: Deactivated successfully. 
Dec 12 17:26:39.658179 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:26:39.663318 systemd-logind[1883]: Removed session 5. Dec 12 17:26:39.678869 systemd[1]: Started sshd@5-172.31.17.228:22-139.178.68.195:52720.service - OpenSSH per-connection server daemon (139.178.68.195:52720). Dec 12 17:26:39.872101 sshd[2235]: Accepted publickey for core from 139.178.68.195 port 52720 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:39.874595 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:39.883332 systemd-logind[1883]: New session 6 of user core. Dec 12 17:26:39.891963 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:26:39.937592 sudo[2240]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:26:39.939088 sudo[2240]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:39.949742 sudo[2240]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:39.962919 sudo[2239]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:26:39.963693 sudo[2239]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:39.983190 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:26:40.047000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:26:40.050264 kernel: kauditd_printk_skb: 87 callbacks suppressed Dec 12 17:26:40.050344 kernel: audit: type=1305 audit(1765560400.047:244): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:26:40.050698 augenrules[2262]: No rules Dec 12 17:26:40.053998 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:26:40.047000 audit[2262]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff3e0a950 a2=420 a3=0 items=0 ppid=2243 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:40.054573 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:26:40.060606 kernel: audit: type=1300 audit(1765560400.047:244): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff3e0a950 a2=420 a3=0 items=0 ppid=2243 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:40.047000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:40.063557 kernel: audit: type=1327 audit(1765560400.047:244): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:40.063720 kernel: audit: type=1130 audit(1765560400.054:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:40.064210 sudo[2239]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:40.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.072978 kernel: audit: type=1131 audit(1765560400.054:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.073103 kernel: audit: type=1106 audit(1765560400.062:247): pid=2239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.062000 audit[2239]: USER_END pid=2239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.062000 audit[2239]: CRED_DISP pid=2239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.082510 kernel: audit: type=1104 audit(1765560400.062:248): pid=2239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.090009 sshd[2238]: Connection closed by 139.178.68.195 port 52720 Dec 12 17:26:40.091034 sshd-session[2235]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:40.092000 audit[2235]: USER_END pid=2235 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:40.100581 systemd[1]: sshd@5-172.31.17.228:22-139.178.68.195:52720.service: Deactivated successfully. 
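The audit PROCTITLE records in this part of the log carry the executed command line as NUL-separated hex. The value in the audit-rules event above, for example, decodes to /sbin/auditctl -R /etc/audit/audit.rules, i.e. the reload triggered by the sudo systemctl restart audit-rules a few entries earlier. A small throwaway decoder (an assumed helper, not part of any system tooling):

    # Decode an audit PROCTITLE hex string back into the original argv;
    # individual arguments are separated by NUL bytes in the kernel record.
    def decode_proctitle(hexstr: str) -> str:
        return " ".join(
            arg.decode() for arg in bytes.fromhex(hexstr).split(b"\x00") if arg
        )

    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))  # -> /sbin/auditctl -R /etc/audit/audit.rules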
Dec 12 17:26:40.092000 audit[2235]: CRED_DISP pid=2235 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:40.106897 kernel: audit: type=1106 audit(1765560400.092:249): pid=2235 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:40.107025 kernel: audit: type=1104 audit(1765560400.092:250): pid=2235 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:40.107073 kernel: audit: type=1131 audit(1765560400.101:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.17.228:22-139.178.68.195:52720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.17.228:22-139.178.68.195:52720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.108557 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:26:40.112741 systemd-logind[1883]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:26:40.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.17.228:22-139.178.68.195:52728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.134094 systemd[1]: Started sshd@6-172.31.17.228:22-139.178.68.195:52728.service - OpenSSH per-connection server daemon (139.178.68.195:52728). Dec 12 17:26:40.136909 systemd-logind[1883]: Removed session 6. 
Dec 12 17:26:40.312000 audit[2271]: USER_ACCT pid=2271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:40.315247 sshd[2271]: Accepted publickey for core from 139.178.68.195 port 52728 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:40.314000 audit[2271]: CRED_ACQ pid=2271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:40.314000 audit[2271]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd67bcfb0 a2=3 a3=0 items=0 ppid=1 pid=2271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:40.314000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:26:40.316919 sshd-session[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:40.326333 systemd-logind[1883]: New session 7 of user core. Dec 12 17:26:40.337041 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:26:40.340000 audit[2271]: USER_START pid=2271 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:40.344000 audit[2274]: CRED_ACQ pid=2274 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:40.379000 audit[2275]: USER_ACCT pid=2275 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.382043 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:26:40.381000 audit[2275]: CRED_REFR pid=2275 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:40.383331 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:40.385000 audit[2275]: USER_START pid=2275 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:41.681207 systemd[1]: Starting docker.service - Docker Application Container Engine... 
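The long run of NETFILTER_CFG and PROCTITLE records below is dockerd programming its IPv4 and IPv6 netfilter chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, the two ISOLATION stages and DOCKER-USER) plus the bridge rules for docker0. As with the auditctl record earlier, each hex proctitle decodes to a plain iptables invocation; the first one below, copied verbatim, is the creation of the DOCKER chain in the nat table:

    # One-off decode of the first dockerd PROCTITLE value from the records
    # below (hex copied verbatim from the audit log).
    hexstr = "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    print(" ".join(a.decode() for a in bytes.fromhex(hexstr).split(b"\x00") if a))
    # -> /usr/bin/iptables --wait -t nat -N DOCKER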
Dec 12 17:26:41.698157 (dockerd)[2292]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:26:42.754708 dockerd[2292]: time="2025-12-12T17:26:42.754112945Z" level=info msg="Starting up" Dec 12 17:26:42.759006 dockerd[2292]: time="2025-12-12T17:26:42.758937869Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:26:42.779784 dockerd[2292]: time="2025-12-12T17:26:42.779729033Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:26:42.873022 dockerd[2292]: time="2025-12-12T17:26:42.872967534Z" level=info msg="Loading containers: start." Dec 12 17:26:42.895722 kernel: Initializing XFRM netlink socket Dec 12 17:26:43.054000 audit[2340]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.054000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcff501e0 a2=0 a3=0 items=0 ppid=2292 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:26:43.059000 audit[2342]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.059000 audit[2342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffef767240 a2=0 a3=0 items=0 ppid=2292 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.059000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:26:43.064000 audit[2344]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.064000 audit[2344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf0e9e50 a2=0 a3=0 items=0 ppid=2292 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.064000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:26:43.068000 audit[2346]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.068000 audit[2346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3b45010 a2=0 a3=0 items=0 ppid=2292 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.068000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:26:43.073000 audit[2348]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.073000 audit[2348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc28b0d50 a2=0 a3=0 items=0 ppid=2292 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.073000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:26:43.077000 audit[2350]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.077000 audit[2350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffedbaad0 a2=0 a3=0 items=0 ppid=2292 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:43.081000 audit[2352]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.081000 audit[2352]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff0722ce0 a2=0 a3=0 items=0 ppid=2292 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.081000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:26:43.085000 audit[2354]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.085000 audit[2354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd3435010 a2=0 a3=0 items=0 ppid=2292 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.085000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:26:43.161000 audit[2357]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.161000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd771faf0 a2=0 a3=0 items=0 ppid=2292 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:26:43.165000 audit[2359]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2359 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.165000 audit[2359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffeec0d3c0 a2=0 a3=0 items=0 ppid=2292 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:26:43.169000 audit[2361]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2361 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.169000 audit[2361]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd14cef40 a2=0 a3=0 items=0 ppid=2292 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:26:43.173000 audit[2363]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.173000 audit[2363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffdc089720 a2=0 a3=0 items=0 ppid=2292 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:43.178000 audit[2365]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.178000 audit[2365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd3559de0 a2=0 a3=0 items=0 ppid=2292 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:26:43.248000 audit[2395]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.248000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcf775c70 a2=0 a3=0 items=0 ppid=2292 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.248000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:26:43.253000 audit[2397]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.253000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe7a02480 a2=0 a3=0 items=0 ppid=2292 
pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.253000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:26:43.258000 audit[2399]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.258000 audit[2399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee0feca0 a2=0 a3=0 items=0 ppid=2292 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:26:43.262000 audit[2401]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.262000 audit[2401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea787dd0 a2=0 a3=0 items=0 ppid=2292 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:26:43.267000 audit[2403]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.267000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff9578980 a2=0 a3=0 items=0 ppid=2292 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:26:43.271000 audit[2405]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.271000 audit[2405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd0bc6320 a2=0 a3=0 items=0 ppid=2292 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:43.275000 audit[2407]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.275000 audit[2407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff897eeb0 a2=0 a3=0 items=0 ppid=2292 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:26:43.275000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:26:43.279000 audit[2409]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.279000 audit[2409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffefa26cc0 a2=0 a3=0 items=0 ppid=2292 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.279000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:26:43.283000 audit[2411]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.283000 audit[2411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff149f860 a2=0 a3=0 items=0 ppid=2292 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.283000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 17:26:43.288000 audit[2413]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.288000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe940c460 a2=0 a3=0 items=0 ppid=2292 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.288000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:26:43.292000 audit[2415]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.292000 audit[2415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffec91e0f0 a2=0 a3=0 items=0 ppid=2292 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:26:43.297000 audit[2417]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2417 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.297000 audit[2417]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc2c29870 a2=0 a3=0 items=0 ppid=2292 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.297000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:43.302000 audit[2419]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.302000 audit[2419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcf5041f0 a2=0 a3=0 items=0 ppid=2292 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.302000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:26:43.313000 audit[2424]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.313000 audit[2424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdbf0e830 a2=0 a3=0 items=0 ppid=2292 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.313000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:26:43.318000 audit[2426]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.318000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe073a970 a2=0 a3=0 items=0 ppid=2292 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.318000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:26:43.322000 audit[2428]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.322000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe3714040 a2=0 a3=0 items=0 ppid=2292 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.322000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:26:43.326000 audit[2430]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.326000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffda0b33b0 a2=0 a3=0 items=0 ppid=2292 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.326000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:26:43.330000 audit[2432]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=2432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.330000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc1fad840 a2=0 a3=0 items=0 ppid=2292 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.330000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:26:43.334000 audit[2434]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:43.334000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdc6be910 a2=0 a3=0 items=0 ppid=2292 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.334000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:26:43.354216 (udev-worker)[2313]: Network interface NamePolicy= disabled on kernel command line. Dec 12 17:26:43.370000 audit[2439]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.370000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd4ffab00 a2=0 a3=0 items=0 ppid=2292 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.370000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:26:43.378000 audit[2441]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.378000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd25e19c0 a2=0 a3=0 items=0 ppid=2292 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:26:43.395000 audit[2449]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.395000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffee56abb0 a2=0 a3=0 items=0 ppid=2292 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.395000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:26:43.413000 audit[2455]: NETFILTER_CFG table=filter:37 family=2 
entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.413000 audit[2455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc2f374d0 a2=0 a3=0 items=0 ppid=2292 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.413000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:26:43.418000 audit[2457]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.418000 audit[2457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff0c8b000 a2=0 a3=0 items=0 ppid=2292 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.418000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:26:43.422000 audit[2459]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.422000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe31f9650 a2=0 a3=0 items=0 ppid=2292 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:26:43.426000 audit[2461]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.426000 audit[2461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffde5f9d90 a2=0 a3=0 items=0 ppid=2292 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:26:43.430000 audit[2463]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:43.430000 audit[2463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff337af90 a2=0 a3=0 items=0 ppid=2292 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.430000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:26:43.433736 systemd-networkd[1628]: docker0: Link UP Dec 12 17:26:43.443994 dockerd[2292]: time="2025-12-12T17:26:43.443925377Z" level=info msg="Loading containers: done." Dec 12 17:26:43.500320 dockerd[2292]: time="2025-12-12T17:26:43.500253461Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:26:43.500548 dockerd[2292]: time="2025-12-12T17:26:43.500377157Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:26:43.500731 dockerd[2292]: time="2025-12-12T17:26:43.500694461Z" level=info msg="Initializing buildkit" Dec 12 17:26:43.553035 dockerd[2292]: time="2025-12-12T17:26:43.552958553Z" level=info msg="Completed buildkit initialization" Dec 12 17:26:43.568787 dockerd[2292]: time="2025-12-12T17:26:43.568586297Z" level=info msg="Daemon has completed initialization" Dec 12 17:26:43.569038 dockerd[2292]: time="2025-12-12T17:26:43.568820861Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:26:43.569876 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:26:43.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:43.814297 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck557927717-merged.mount: Deactivated successfully. Dec 12 17:26:45.337699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:26:45.341144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:45.527910 containerd[1911]: time="2025-12-12T17:26:45.527846191Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 17:26:45.740646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:45.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:45.742224 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 17:26:45.742338 kernel: audit: type=1130 audit(1765560405.739:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:45.760187 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:26:45.836549 kubelet[2512]: E1212 17:26:45.836467 2512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:26:45.845164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:26:45.845642 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
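The PROCTITLE fields in the audit records above are the hex-encoded argv of each rule-installing command, with NUL bytes separating the arguments (exe= shows /usr/bin/xtables-nft-multi because iptables and ip6tables are symlinks to it on nft-backed systems). A minimal decoding sketch, standard library only; the sample value is the DOCKER-USER rule copied from the records above:

    import binascii

    def decode_proctitle(hex_value: str) -> list[str]:
        """Turn an audit PROCTITLE hex string back into the original argv list."""
        raw = binascii.unhexlify(hex_value)
        # argv entries are NUL-separated; drop any empty trailing element
        return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

    if __name__ == "__main__":
        sample = ("2F7573722F62696E2F6970367461626C6573002D2D77616974002D41"
                  "00444F434B45522D55534552002D6A0052455455524E")
        print(decode_proctitle(sample))
        # ['/usr/bin/ip6tables', '--wait', '-A', 'DOCKER-USER', '-j', 'RETURN']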
Dec 12 17:26:45.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:45.848822 systemd[1]: kubelet.service: Consumed 321ms CPU time, 105.6M memory peak. Dec 12 17:26:45.854721 kernel: audit: type=1131 audit(1765560405.847:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:46.300515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284099411.mount: Deactivated successfully. Dec 12 17:26:47.671824 containerd[1911]: time="2025-12-12T17:26:47.671749378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:47.673886 containerd[1911]: time="2025-12-12T17:26:47.673801546Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791187" Dec 12 17:26:47.676281 containerd[1911]: time="2025-12-12T17:26:47.675258514Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:47.682318 containerd[1911]: time="2025-12-12T17:26:47.682241062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:47.684690 containerd[1911]: time="2025-12-12T17:26:47.684598594Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.156680895s" Dec 12 17:26:47.684935 containerd[1911]: time="2025-12-12T17:26:47.684899146Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 12 17:26:47.688047 containerd[1911]: time="2025-12-12T17:26:47.687852466Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 17:26:49.334505 containerd[1911]: time="2025-12-12T17:26:49.332781130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:49.335763 containerd[1911]: time="2025-12-12T17:26:49.335692642Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Dec 12 17:26:49.337207 containerd[1911]: time="2025-12-12T17:26:49.337154470Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:49.343381 containerd[1911]: time="2025-12-12T17:26:49.343305466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:49.345545 containerd[1911]: 
time="2025-12-12T17:26:49.345473254Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.65719414s" Dec 12 17:26:49.345545 containerd[1911]: time="2025-12-12T17:26:49.345536206Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 12 17:26:49.346555 containerd[1911]: time="2025-12-12T17:26:49.346501834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 17:26:50.833701 containerd[1911]: time="2025-12-12T17:26:50.832688437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:50.836411 containerd[1911]: time="2025-12-12T17:26:50.836327581Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 12 17:26:50.838321 containerd[1911]: time="2025-12-12T17:26:50.838254901Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:50.846202 containerd[1911]: time="2025-12-12T17:26:50.846118333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:50.849337 containerd[1911]: time="2025-12-12T17:26:50.849144350Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.502583824s" Dec 12 17:26:50.849337 containerd[1911]: time="2025-12-12T17:26:50.849199862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 12 17:26:50.850310 containerd[1911]: time="2025-12-12T17:26:50.850242218Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 17:26:52.176742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316796163.mount: Deactivated successfully. 
Dec 12 17:26:52.831191 containerd[1911]: time="2025-12-12T17:26:52.831094671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:52.834151 containerd[1911]: time="2025-12-12T17:26:52.834040707Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=18413667" Dec 12 17:26:52.836811 containerd[1911]: time="2025-12-12T17:26:52.836642055Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:52.842292 containerd[1911]: time="2025-12-12T17:26:52.842209311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:52.843926 containerd[1911]: time="2025-12-12T17:26:52.843864891Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.993557633s" Dec 12 17:26:52.844275 containerd[1911]: time="2025-12-12T17:26:52.844117215Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 12 17:26:52.845368 containerd[1911]: time="2025-12-12T17:26:52.845134455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 17:26:53.469017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882748142.mount: Deactivated successfully. 
Dec 12 17:26:54.706038 containerd[1911]: time="2025-12-12T17:26:54.705938621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:54.709314 containerd[1911]: time="2025-12-12T17:26:54.709206437Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Dec 12 17:26:54.712075 containerd[1911]: time="2025-12-12T17:26:54.711968849Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:54.718491 containerd[1911]: time="2025-12-12T17:26:54.718389725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:54.721015 containerd[1911]: time="2025-12-12T17:26:54.720769385Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.875576166s" Dec 12 17:26:54.721015 containerd[1911]: time="2025-12-12T17:26:54.720835469Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 12 17:26:54.721892 containerd[1911]: time="2025-12-12T17:26:54.721817249Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:26:55.250759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866889861.mount: Deactivated successfully. 
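Containerd logs one "Pulled image ... in <duration>" line per image (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy and coredns so far; pause and etcd follow below). A small sketch that summarizes those pulls from a saved copy of this journal; the file name is hypothetical and the regex assumes the escaped-quote logfmt formatting shown above:

    import re

    PULLED = re.compile(
        r'Pulled image \\"(?P<image>[^\\"]+)\\".*?'
        r'size \\"(?P<size>\d+)\\" in (?P<duration>[\d.]+(?:ms|s))'
    )

    def summarize(journal_text: str) -> None:
        for m in PULLED.finditer(journal_text):
            mib = int(m.group("size")) / (1024 * 1024)
            print(f'{m.group("image"):55s} {mib:8.1f} MiB  {m.group("duration")}')

    if __name__ == "__main__":
        with open("node-boot.log") as fh:  # hypothetical file holding this journal text
            summarize(fh.read())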
Dec 12 17:26:55.263719 containerd[1911]: time="2025-12-12T17:26:55.263412171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:55.267009 containerd[1911]: time="2025-12-12T17:26:55.266902119Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:26:55.269338 containerd[1911]: time="2025-12-12T17:26:55.269211747Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:55.276927 containerd[1911]: time="2025-12-12T17:26:55.276759783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:55.279107 containerd[1911]: time="2025-12-12T17:26:55.278848408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 556.760943ms" Dec 12 17:26:55.279107 containerd[1911]: time="2025-12-12T17:26:55.278923936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:26:55.280408 containerd[1911]: time="2025-12-12T17:26:55.280103296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 17:26:55.865717 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:26:55.872048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:55.916897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624799497.mount: Deactivated successfully. Dec 12 17:26:56.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:56.344102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:56.351899 kernel: audit: type=1130 audit(1765560416.342:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:56.365180 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:26:56.503033 kubelet[2671]: E1212 17:26:56.502927 2671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:26:56.508222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:26:56.509082 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
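Both kubelet start attempts so far (restart counters 1 and 2) die the same way: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits with status 1 and systemd schedules another restart. That is the expected state on a node still waiting for kubeadm-style initialization, which writes that file during init/join. A trivial operator-side check mirroring the error (the helper name is made up for illustration):

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_config_present() -> bool:
        if not KUBELET_CONFIG.exists():
            print(f"{KUBELET_CONFIG} missing - kubelet will keep exiting until init/join writes it")
            return False
        print(f"{KUBELET_CONFIG} present")
        return True

    if __name__ == "__main__":
        kubelet_config_present()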
Dec 12 17:26:56.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:56.511930 systemd[1]: kubelet.service: Consumed 350ms CPU time, 107.1M memory peak. Dec 12 17:26:56.520723 kernel: audit: type=1131 audit(1765560416.510:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:58.430046 containerd[1911]: time="2025-12-12T17:26:58.429971047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:58.431805 containerd[1911]: time="2025-12-12T17:26:58.431709871Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Dec 12 17:26:58.433720 containerd[1911]: time="2025-12-12T17:26:58.433095775Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:58.439241 containerd[1911]: time="2025-12-12T17:26:58.439178407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:58.441777 containerd[1911]: time="2025-12-12T17:26:58.441713395Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.161533359s" Dec 12 17:26:58.442004 containerd[1911]: time="2025-12-12T17:26:58.441964219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 12 17:27:02.700048 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 12 17:27:02.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:02.710822 kernel: audit: type=1131 audit(1765560422.699:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:02.717000 audit: BPF prog-id=65 op=UNLOAD Dec 12 17:27:02.720700 kernel: audit: type=1334 audit(1765560422.717:307): prog-id=65 op=UNLOAD Dec 12 17:27:05.740424 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:27:05.740868 systemd[1]: kubelet.service: Consumed 350ms CPU time, 107.1M memory peak. Dec 12 17:27:05.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:05.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 17:27:05.751060 kernel: audit: type=1130 audit(1765560425.739:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:05.751187 kernel: audit: type=1131 audit(1765560425.739:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:05.756106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:27:05.802744 systemd[1]: Reload requested from client PID 2753 ('systemctl') (unit session-7.scope)... Dec 12 17:27:05.802774 systemd[1]: Reloading... Dec 12 17:27:06.066588 zram_generator::config[2803]: No configuration found. Dec 12 17:27:06.548542 systemd[1]: Reloading finished in 745 ms. Dec 12 17:27:06.605000 audit: BPF prog-id=69 op=LOAD Dec 12 17:27:06.612259 kernel: audit: type=1334 audit(1765560426.605:310): prog-id=69 op=LOAD Dec 12 17:27:06.612375 kernel: audit: type=1334 audit(1765560426.608:311): prog-id=54 op=UNLOAD Dec 12 17:27:06.608000 audit: BPF prog-id=54 op=UNLOAD Dec 12 17:27:06.608000 audit: BPF prog-id=70 op=LOAD Dec 12 17:27:06.614052 kernel: audit: type=1334 audit(1765560426.608:312): prog-id=70 op=LOAD Dec 12 17:27:06.608000 audit: BPF prog-id=68 op=UNLOAD Dec 12 17:27:06.615830 kernel: audit: type=1334 audit(1765560426.608:313): prog-id=68 op=UNLOAD Dec 12 17:27:06.617540 kernel: audit: type=1334 audit(1765560426.614:314): prog-id=71 op=LOAD Dec 12 17:27:06.614000 audit: BPF prog-id=71 op=LOAD Dec 12 17:27:06.614000 audit: BPF prog-id=56 op=UNLOAD Dec 12 17:27:06.619354 kernel: audit: type=1334 audit(1765560426.614:315): prog-id=56 op=UNLOAD Dec 12 17:27:06.617000 audit: BPF prog-id=72 op=LOAD Dec 12 17:27:06.617000 audit: BPF prog-id=73 op=LOAD Dec 12 17:27:06.617000 audit: BPF prog-id=57 op=UNLOAD Dec 12 17:27:06.617000 audit: BPF prog-id=58 op=UNLOAD Dec 12 17:27:06.619000 audit: BPF prog-id=74 op=LOAD Dec 12 17:27:06.622000 audit: BPF prog-id=48 op=UNLOAD Dec 12 17:27:06.622000 audit: BPF prog-id=75 op=LOAD Dec 12 17:27:06.622000 audit: BPF prog-id=76 op=LOAD Dec 12 17:27:06.622000 audit: BPF prog-id=49 op=UNLOAD Dec 12 17:27:06.622000 audit: BPF prog-id=50 op=UNLOAD Dec 12 17:27:06.623000 audit: BPF prog-id=77 op=LOAD Dec 12 17:27:06.623000 audit: BPF prog-id=51 op=UNLOAD Dec 12 17:27:06.624000 audit: BPF prog-id=78 op=LOAD Dec 12 17:27:06.624000 audit: BPF prog-id=79 op=LOAD Dec 12 17:27:06.624000 audit: BPF prog-id=52 op=UNLOAD Dec 12 17:27:06.624000 audit: BPF prog-id=53 op=UNLOAD Dec 12 17:27:06.625000 audit: BPF prog-id=80 op=LOAD Dec 12 17:27:06.625000 audit: BPF prog-id=81 op=LOAD Dec 12 17:27:06.625000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:27:06.625000 audit: BPF prog-id=47 op=UNLOAD Dec 12 17:27:06.644000 audit: BPF prog-id=82 op=LOAD Dec 12 17:27:06.644000 audit: BPF prog-id=55 op=UNLOAD Dec 12 17:27:06.648000 audit: BPF prog-id=83 op=LOAD Dec 12 17:27:06.649000 audit: BPF prog-id=62 op=UNLOAD Dec 12 17:27:06.649000 audit: BPF prog-id=84 op=LOAD Dec 12 17:27:06.649000 audit: BPF prog-id=85 op=LOAD Dec 12 17:27:06.649000 audit: BPF prog-id=63 op=UNLOAD Dec 12 17:27:06.649000 audit: BPF prog-id=64 op=UNLOAD Dec 12 17:27:06.650000 audit: BPF prog-id=86 op=LOAD Dec 12 17:27:06.650000 audit: BPF prog-id=59 op=UNLOAD Dec 12 17:27:06.650000 audit: BPF prog-id=87 op=LOAD Dec 12 17:27:06.650000 audit: BPF 
prog-id=88 op=LOAD Dec 12 17:27:06.650000 audit: BPF prog-id=60 op=UNLOAD Dec 12 17:27:06.650000 audit: BPF prog-id=61 op=UNLOAD Dec 12 17:27:06.682759 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:27:06.682964 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:27:06.684685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:27:06.684809 systemd[1]: kubelet.service: Consumed 231ms CPU time, 95.3M memory peak. Dec 12 17:27:06.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:27:06.688435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:27:07.031366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:27:07.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:07.054524 (kubelet)[2866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:27:07.127103 kubelet[2866]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:27:07.127103 kubelet[2866]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:27:07.127103 kubelet[2866]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
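The three warnings above are the kubelet's standard deprecation notices: --container-runtime-endpoint and --volume-plugin-dir should move into the KubeletConfiguration file, while --pod-infra-container-image is simply going away in 1.35 with no config-file replacement (the sandbox image comes from CRI, per the message itself). A sketch of the equivalent config-file fragment, emitted with PyYAML; field names follow the kubelet.config.k8s.io/v1beta1 schema as best recalled, the containerd socket path is assumed, and the volume plugin directory is the one this kubelet reports recreating further down:

    import yaml  # PyYAML

    kubelet_config_fragment = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # replaces --container-runtime-endpoint (socket path assumed, not from the log)
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        # replaces --volume-plugin-dir (path taken from the Flexvolume message below)
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }

    print(yaml.safe_dump(kubelet_config_fragment, sort_keys=False))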
Dec 12 17:27:07.128697 kubelet[2866]: I1212 17:27:07.127808 2866 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:27:08.985329 kubelet[2866]: I1212 17:27:08.985274 2866 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:27:08.986778 kubelet[2866]: I1212 17:27:08.986029 2866 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:27:08.986778 kubelet[2866]: I1212 17:27:08.986429 2866 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:27:09.034578 kubelet[2866]: I1212 17:27:09.033568 2866 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:27:09.035200 kubelet[2866]: E1212 17:27:09.035160 2866 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:27:09.053369 kubelet[2866]: I1212 17:27:09.053320 2866 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:27:09.059133 kubelet[2866]: I1212 17:27:09.059068 2866 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 17:27:09.059893 kubelet[2866]: I1212 17:27:09.059828 2866 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:27:09.060202 kubelet[2866]: I1212 17:27:09.059894 2866 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-228","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:27:09.060432 kubelet[2866]: I1212 17:27:09.060345 2866 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:27:09.060432 kubelet[2866]: I1212 
17:27:09.060373 2866 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:27:09.062127 kubelet[2866]: I1212 17:27:09.062040 2866 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:27:09.068896 kubelet[2866]: I1212 17:27:09.068840 2866 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:27:09.069455 kubelet[2866]: I1212 17:27:09.069301 2866 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:27:09.070483 kubelet[2866]: I1212 17:27:09.070320 2866 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:27:09.070483 kubelet[2866]: I1212 17:27:09.070369 2866 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:27:09.076939 kubelet[2866]: I1212 17:27:09.076871 2866 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:27:09.078683 kubelet[2866]: I1212 17:27:09.078484 2866 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:27:09.078889 kubelet[2866]: W1212 17:27:09.078847 2866 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:27:09.087744 kubelet[2866]: I1212 17:27:09.085484 2866 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:27:09.087744 kubelet[2866]: I1212 17:27:09.085562 2866 server.go:1289] "Started kubelet" Dec 12 17:27:09.087744 kubelet[2866]: E1212 17:27:09.085917 2866 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-228&limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:27:09.089902 kubelet[2866]: E1212 17:27:09.089856 2866 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:27:09.090248 kubelet[2866]: I1212 17:27:09.090177 2866 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:27:09.090919 kubelet[2866]: I1212 17:27:09.090889 2866 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:27:09.094094 kubelet[2866]: I1212 17:27:09.094050 2866 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:27:09.101106 kubelet[2866]: E1212 17:27:09.098505 2866 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.228:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-228.188087db1d20ed04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-228,UID:ip-172-31-17-228,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-228,},FirstTimestamp:2025-12-12 17:27:09.085519108 +0000 UTC m=+2.023588679,LastTimestamp:2025-12-12 
17:27:09.085519108 +0000 UTC m=+2.023588679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-228,}" Dec 12 17:27:09.103113 kubelet[2866]: I1212 17:27:09.103006 2866 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:27:09.104829 kubelet[2866]: I1212 17:27:09.104773 2866 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:27:09.106602 kubelet[2866]: I1212 17:27:09.106530 2866 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:27:09.114605 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 12 17:27:09.114922 kernel: audit: type=1325 audit(1765560429.108:352): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.108000 audit[2882]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.115122 kubelet[2866]: E1212 17:27:09.111316 2866 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-228\" not found" Dec 12 17:27:09.115122 kubelet[2866]: I1212 17:27:09.111366 2866 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:27:09.115122 kubelet[2866]: I1212 17:27:09.111724 2866 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:27:09.115122 kubelet[2866]: I1212 17:27:09.111831 2866 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:27:09.115122 kubelet[2866]: E1212 17:27:09.112482 2866 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:27:09.115122 kubelet[2866]: E1212 17:27:09.112991 2866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": dial tcp 172.31.17.228:6443: connect: connection refused" interval="200ms" Dec 12 17:27:09.115122 kubelet[2866]: I1212 17:27:09.115006 2866 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:27:09.108000 audit[2882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9d35c10 a2=0 a3=0 items=0 ppid=2866 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.122843 kernel: audit: type=1300 audit(1765560429.108:352): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9d35c10 a2=0 a3=0 items=0 ppid=2866 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.122983 kubelet[2866]: E1212 17:27:09.120373 2866 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:27:09.122983 kubelet[2866]: I1212 17:27:09.122063 2866 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:27:09.122983 kubelet[2866]: I1212 17:27:09.122438 2866 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:27:09.130040 kernel: audit: type=1327 audit(1765560429.108:352): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:27:09.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:27:09.122000 audit[2883]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.133622 kernel: audit: type=1325 audit(1765560429.122:353): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.122000 audit[2883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffa40330 a2=0 a3=0 items=0 ppid=2866 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.122000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:27:09.142735 kernel: audit: type=1300 audit(1765560429.122:353): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffa40330 a2=0 a3=0 items=0 ppid=2866 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.152952 kernel: audit: type=1327 audit(1765560429.122:353): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:27:09.153080 kernel: audit: type=1325 audit(1765560429.139:354): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.139000 audit[2887]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.160192 kernel: audit: type=1300 audit(1765560429.139:354): arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd374abb0 a2=0 a3=0 items=0 ppid=2866 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.139000 audit[2887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd374abb0 a2=0 a3=0 items=0 ppid=2866 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.139000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:27:09.165713 kernel: audit: type=1327 audit(1765560429.139:354): 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:27:09.152000 audit[2889]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.169764 kernel: audit: type=1325 audit(1765560429.152:355): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.152000 audit[2889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe331ade0 a2=0 a3=0 items=0 ppid=2866 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:27:09.181000 audit[2895]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.181000 audit[2895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffceb958c0 a2=0 a3=0 items=0 ppid=2866 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 17:27:09.184849 kubelet[2866]: I1212 17:27:09.184789 2866 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:27:09.186071 kubelet[2866]: I1212 17:27:09.185789 2866 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:27:09.186071 kubelet[2866]: I1212 17:27:09.186067 2866 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:27:09.186255 kubelet[2866]: I1212 17:27:09.186098 2866 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:27:09.186000 audit[2896]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:09.186000 audit[2896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff03b1b10 a2=0 a3=0 items=0 ppid=2866 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:27:09.190244 kubelet[2866]: I1212 17:27:09.188798 2866 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:27:09.190244 kubelet[2866]: I1212 17:27:09.188832 2866 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:27:09.190244 kubelet[2866]: I1212 17:27:09.188866 2866 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
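Every "Failed to watch", certificate-signing and event-post error in this stretch fails identically: dial tcp 172.31.17.228:6443: connect: connection refused, i.e. nothing is listening on the API server port yet because the kube-apiserver static pod is only created further below. A minimal probe for watching that port come up (address and port taken from the log; the polling loop itself is just an illustration):

    import socket
    import time

    def wait_for_apiserver(host: str = "172.31.17.228", port: int = 6443) -> None:
        while True:
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    print(f"{host}:{port} is accepting connections")
                    return
            except ConnectionRefusedError:
                print(f"{host}:{port} refused - apiserver not listening yet")
            except OSError as exc:
                print(f"{host}:{port} unreachable: {exc}")
            time.sleep(2)

    if __name__ == "__main__":
        wait_for_apiserver()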
Dec 12 17:27:09.190244 kubelet[2866]: I1212 17:27:09.188880 2866 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:27:09.190244 kubelet[2866]: E1212 17:27:09.188953 2866 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:27:09.190713 kubelet[2866]: I1212 17:27:09.190499 2866 policy_none.go:49] "None policy: Start" Dec 12 17:27:09.190713 kubelet[2866]: I1212 17:27:09.190534 2866 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:27:09.190713 kubelet[2866]: I1212 17:27:09.190558 2866 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:27:09.191000 audit[2897]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.191000 audit[2897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe76ee670 a2=0 a3=0 items=0 ppid=2866 pid=2897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:27:09.194000 audit[2898]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.194000 audit[2898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd5b5960 a2=0 a3=0 items=0 ppid=2866 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:27:09.197000 audit[2900]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:09.197000 audit[2900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9ec3fa0 a2=0 a3=0 items=0 ppid=2866 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:27:09.197000 audit[2899]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:09.197000 audit[2899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc8c66bd0 a2=0 a3=0 items=0 ppid=2866 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:27:09.199981 kubelet[2866]: E1212 17:27:09.199883 2866 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://172.31.17.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:27:09.199000 audit[2901]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:09.199000 audit[2901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff46b8760 a2=0 a3=0 items=0 ppid=2866 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:27:09.203000 audit[2902]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:09.203000 audit[2902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff72b6040 a2=0 a3=0 items=0 ppid=2866 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.203000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:27:09.209796 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:27:09.221504 kubelet[2866]: E1212 17:27:09.221417 2866 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-228\" not found" Dec 12 17:27:09.229723 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:27:09.239433 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:27:09.252453 kubelet[2866]: E1212 17:27:09.252412 2866 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:27:09.253505 kubelet[2866]: I1212 17:27:09.253471 2866 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:27:09.253786 kubelet[2866]: I1212 17:27:09.253714 2866 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:27:09.257028 kubelet[2866]: I1212 17:27:09.256709 2866 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:27:09.259401 kubelet[2866]: E1212 17:27:09.259195 2866 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:27:09.259401 kubelet[2866]: E1212 17:27:09.259323 2866 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-228\" not found" Dec 12 17:27:09.314499 kubelet[2866]: E1212 17:27:09.314422 2866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": dial tcp 172.31.17.228:6443: connect: connection refused" interval="400ms" Dec 12 17:27:09.317069 systemd[1]: Created slice kubepods-burstable-podacc61a7e435ab639eec35a9fb1aef45d.slice - libcontainer container kubepods-burstable-podacc61a7e435ab639eec35a9fb1aef45d.slice. Dec 12 17:27:09.321970 kubelet[2866]: I1212 17:27:09.321917 2866 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acc61a7e435ab639eec35a9fb1aef45d-ca-certs\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"acc61a7e435ab639eec35a9fb1aef45d\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:09.322125 kubelet[2866]: I1212 17:27:09.321986 2866 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acc61a7e435ab639eec35a9fb1aef45d-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"acc61a7e435ab639eec35a9fb1aef45d\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:09.322125 kubelet[2866]: I1212 17:27:09.322025 2866 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:09.322125 kubelet[2866]: I1212 17:27:09.322062 2866 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:09.322125 kubelet[2866]: I1212 17:27:09.322103 2866 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:09.322331 kubelet[2866]: I1212 17:27:09.322149 2866 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb49638034ba188aa388bee0d3be3adf-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-228\" (UID: \"fb49638034ba188aa388bee0d3be3adf\") " pod="kube-system/kube-scheduler-ip-172-31-17-228" Dec 12 17:27:09.322331 kubelet[2866]: I1212 17:27:09.322188 2866 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acc61a7e435ab639eec35a9fb1aef45d-usr-share-ca-certificates\") pod 
\"kube-apiserver-ip-172-31-17-228\" (UID: \"acc61a7e435ab639eec35a9fb1aef45d\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:09.322331 kubelet[2866]: I1212 17:27:09.322228 2866 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:09.322331 kubelet[2866]: I1212 17:27:09.322264 2866 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:09.330773 kubelet[2866]: E1212 17:27:09.330713 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:09.339163 systemd[1]: Created slice kubepods-burstable-podbaf5aa8dcab57805a7d0f2676c6acd30.slice - libcontainer container kubepods-burstable-podbaf5aa8dcab57805a7d0f2676c6acd30.slice. Dec 12 17:27:09.345519 kubelet[2866]: E1212 17:27:09.344287 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:09.351615 systemd[1]: Created slice kubepods-burstable-podfb49638034ba188aa388bee0d3be3adf.slice - libcontainer container kubepods-burstable-podfb49638034ba188aa388bee0d3be3adf.slice. 
Dec 12 17:27:09.356435 kubelet[2866]: E1212 17:27:09.356360 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:09.359840 kubelet[2866]: I1212 17:27:09.359478 2866 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Dec 12 17:27:09.360509 kubelet[2866]: E1212 17:27:09.360429 2866 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.228:6443/api/v1/nodes\": dial tcp 172.31.17.228:6443: connect: connection refused" node="ip-172-31-17-228" Dec 12 17:27:09.563389 kubelet[2866]: I1212 17:27:09.563233 2866 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Dec 12 17:27:09.564749 kubelet[2866]: E1212 17:27:09.564699 2866 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.228:6443/api/v1/nodes\": dial tcp 172.31.17.228:6443: connect: connection refused" node="ip-172-31-17-228" Dec 12 17:27:09.633060 containerd[1911]: time="2025-12-12T17:27:09.632792299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-228,Uid:acc61a7e435ab639eec35a9fb1aef45d,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:09.645887 containerd[1911]: time="2025-12-12T17:27:09.645763663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-228,Uid:baf5aa8dcab57805a7d0f2676c6acd30,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:09.659269 containerd[1911]: time="2025-12-12T17:27:09.659155603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-228,Uid:fb49638034ba188aa388bee0d3be3adf,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:09.691697 containerd[1911]: time="2025-12-12T17:27:09.691512007Z" level=info msg="connecting to shim 1ef074bb19f9438b594cb22e72c2d881277d1c9f0f00380ffb527c45e72ed7d6" address="unix:///run/containerd/s/a15b0a03ba7647036df5e90e1f80ebbc8c13827dd65d26ced1d7747cbfad3e4b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:09.715683 kubelet[2866]: E1212 17:27:09.715581 2866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": dial tcp 172.31.17.228:6443: connect: connection refused" interval="800ms" Dec 12 17:27:09.742992 containerd[1911]: time="2025-12-12T17:27:09.742885411Z" level=info msg="connecting to shim ecc0713f4452cb01518b93d2a10976bb1ee3ccc3d64c74cbbfcee1a71177c039" address="unix:///run/containerd/s/66f7a9c42dae5bb5888110d5e88fc64d6ef76523eb959ad8b564477de9096fe0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:09.773447 containerd[1911]: time="2025-12-12T17:27:09.773344136Z" level=info msg="connecting to shim 9208360a0a5479dcbbea6d6565672ef270ce58f639d1fd6e0187503d9e20b4fb" address="unix:///run/containerd/s/922d4b1877b21ea61c091444366624dc4fd525f94357720e7652aaa90e7ec905" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:09.798043 systemd[1]: Started cri-containerd-1ef074bb19f9438b594cb22e72c2d881277d1c9f0f00380ffb527c45e72ed7d6.scope - libcontainer container 1ef074bb19f9438b594cb22e72c2d881277d1c9f0f00380ffb527c45e72ed7d6. Dec 12 17:27:09.833017 systemd[1]: Started cri-containerd-ecc0713f4452cb01518b93d2a10976bb1ee3ccc3d64c74cbbfcee1a71177c039.scope - libcontainer container ecc0713f4452cb01518b93d2a10976bb1ee3ccc3d64c74cbbfcee1a71177c039. 
Dec 12 17:27:09.844000 audit: BPF prog-id=89 op=LOAD Dec 12 17:27:09.848000 audit: BPF prog-id=90 op=LOAD Dec 12 17:27:09.848000 audit[2928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2912 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165663037346262313966393433386235393463623232653732633264 Dec 12 17:27:09.850000 audit: BPF prog-id=90 op=UNLOAD Dec 12 17:27:09.850000 audit[2928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2912 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165663037346262313966393433386235393463623232653732633264 Dec 12 17:27:09.850000 audit: BPF prog-id=91 op=LOAD Dec 12 17:27:09.850000 audit[2928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2912 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165663037346262313966393433386235393463623232653732633264 Dec 12 17:27:09.851000 audit: BPF prog-id=92 op=LOAD Dec 12 17:27:09.851000 audit[2928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2912 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165663037346262313966393433386235393463623232653732633264 Dec 12 17:27:09.851000 audit: BPF prog-id=92 op=UNLOAD Dec 12 17:27:09.851000 audit[2928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2912 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165663037346262313966393433386235393463623232653732633264 Dec 12 17:27:09.851000 audit: BPF prog-id=91 op=UNLOAD Dec 12 17:27:09.851000 audit[2928]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2912 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165663037346262313966393433386235393463623232653732633264 Dec 12 17:27:09.853000 audit: BPF prog-id=93 op=LOAD Dec 12 17:27:09.853000 audit[2928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2912 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165663037346262313966393433386235393463623232653732633264 Dec 12 17:27:09.878026 systemd[1]: Started cri-containerd-9208360a0a5479dcbbea6d6565672ef270ce58f639d1fd6e0187503d9e20b4fb.scope - libcontainer container 9208360a0a5479dcbbea6d6565672ef270ce58f639d1fd6e0187503d9e20b4fb. Dec 12 17:27:09.883000 audit: BPF prog-id=94 op=LOAD Dec 12 17:27:09.885000 audit: BPF prog-id=95 op=LOAD Dec 12 17:27:09.885000 audit[2965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2934 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563633037313366343435326362303135313862393364326131303937 Dec 12 17:27:09.885000 audit: BPF prog-id=95 op=UNLOAD Dec 12 17:27:09.885000 audit[2965]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563633037313366343435326362303135313862393364326131303937 Dec 12 17:27:09.885000 audit: BPF prog-id=96 op=LOAD Dec 12 17:27:09.885000 audit[2965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2934 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.885000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563633037313366343435326362303135313862393364326131303937 Dec 12 17:27:09.887000 audit: BPF prog-id=97 op=LOAD Dec 12 17:27:09.887000 audit[2965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2934 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563633037313366343435326362303135313862393364326131303937 Dec 12 17:27:09.887000 audit: BPF prog-id=97 op=UNLOAD Dec 12 17:27:09.887000 audit[2965]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563633037313366343435326362303135313862393364326131303937 Dec 12 17:27:09.887000 audit: BPF prog-id=96 op=UNLOAD Dec 12 17:27:09.887000 audit[2965]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563633037313366343435326362303135313862393364326131303937 Dec 12 17:27:09.887000 audit: BPF prog-id=98 op=LOAD Dec 12 17:27:09.887000 audit[2965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2934 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563633037313366343435326362303135313862393364326131303937 Dec 12 17:27:09.932000 audit: BPF prog-id=99 op=LOAD Dec 12 17:27:09.934000 audit: BPF prog-id=100 op=LOAD Dec 12 17:27:09.934000 audit[2989]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2968 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.934000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303833363061306135343739646362626561366436353635363732 Dec 12 17:27:09.934000 audit: BPF prog-id=100 op=UNLOAD Dec 12 17:27:09.934000 audit[2989]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303833363061306135343739646362626561366436353635363732 Dec 12 17:27:09.934000 audit: BPF prog-id=101 op=LOAD Dec 12 17:27:09.934000 audit[2989]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2968 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303833363061306135343739646362626561366436353635363732 Dec 12 17:27:09.934000 audit: BPF prog-id=102 op=LOAD Dec 12 17:27:09.934000 audit[2989]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2968 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303833363061306135343739646362626561366436353635363732 Dec 12 17:27:09.935000 audit: BPF prog-id=102 op=UNLOAD Dec 12 17:27:09.935000 audit[2989]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303833363061306135343739646362626561366436353635363732 Dec 12 17:27:09.935000 audit: BPF prog-id=101 op=UNLOAD Dec 12 17:27:09.935000 audit[2989]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.935000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303833363061306135343739646362626561366436353635363732 Dec 12 17:27:09.935000 audit: BPF prog-id=103 op=LOAD Dec 12 17:27:09.935000 audit[2989]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2968 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:09.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303833363061306135343739646362626561366436353635363732 Dec 12 17:27:09.975953 kubelet[2866]: I1212 17:27:09.975903 2866 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Dec 12 17:27:09.976784 kubelet[2866]: E1212 17:27:09.976715 2866 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.228:6443/api/v1/nodes\": dial tcp 172.31.17.228:6443: connect: connection refused" node="ip-172-31-17-228" Dec 12 17:27:09.977029 containerd[1911]: time="2025-12-12T17:27:09.976916457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-228,Uid:acc61a7e435ab639eec35a9fb1aef45d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ef074bb19f9438b594cb22e72c2d881277d1c9f0f00380ffb527c45e72ed7d6\"" Dec 12 17:27:09.980310 kubelet[2866]: E1212 17:27:09.979704 2866 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:27:09.992936 containerd[1911]: time="2025-12-12T17:27:09.992629137Z" level=info msg="CreateContainer within sandbox \"1ef074bb19f9438b594cb22e72c2d881277d1c9f0f00380ffb527c45e72ed7d6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:27:09.997033 containerd[1911]: time="2025-12-12T17:27:09.996965397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-228,Uid:baf5aa8dcab57805a7d0f2676c6acd30,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecc0713f4452cb01518b93d2a10976bb1ee3ccc3d64c74cbbfcee1a71177c039\"" Dec 12 17:27:10.010878 containerd[1911]: time="2025-12-12T17:27:10.010814873Z" level=info msg="CreateContainer within sandbox \"ecc0713f4452cb01518b93d2a10976bb1ee3ccc3d64c74cbbfcee1a71177c039\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:27:10.039705 containerd[1911]: time="2025-12-12T17:27:10.038639441Z" level=info msg="Container 5c05203a8c9697d4c059060e89bab65e1888830de3a247e6042b1b51acb93ffc: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:10.043951 containerd[1911]: time="2025-12-12T17:27:10.043892657Z" level=info msg="Container 61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:10.044833 containerd[1911]: time="2025-12-12T17:27:10.044789129Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-228,Uid:fb49638034ba188aa388bee0d3be3adf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9208360a0a5479dcbbea6d6565672ef270ce58f639d1fd6e0187503d9e20b4fb\"" Dec 12 17:27:10.057103 containerd[1911]: time="2025-12-12T17:27:10.057033785Z" level=info msg="CreateContainer within sandbox \"1ef074bb19f9438b594cb22e72c2d881277d1c9f0f00380ffb527c45e72ed7d6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c05203a8c9697d4c059060e89bab65e1888830de3a247e6042b1b51acb93ffc\"" Dec 12 17:27:10.058149 containerd[1911]: time="2025-12-12T17:27:10.058093145Z" level=info msg="StartContainer for \"5c05203a8c9697d4c059060e89bab65e1888830de3a247e6042b1b51acb93ffc\"" Dec 12 17:27:10.059243 containerd[1911]: time="2025-12-12T17:27:10.059178761Z" level=info msg="CreateContainer within sandbox \"9208360a0a5479dcbbea6d6565672ef270ce58f639d1fd6e0187503d9e20b4fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:27:10.060596 containerd[1911]: time="2025-12-12T17:27:10.060521369Z" level=info msg="connecting to shim 5c05203a8c9697d4c059060e89bab65e1888830de3a247e6042b1b51acb93ffc" address="unix:///run/containerd/s/a15b0a03ba7647036df5e90e1f80ebbc8c13827dd65d26ced1d7747cbfad3e4b" protocol=ttrpc version=3 Dec 12 17:27:10.067860 containerd[1911]: time="2025-12-12T17:27:10.067791821Z" level=info msg="CreateContainer within sandbox \"ecc0713f4452cb01518b93d2a10976bb1ee3ccc3d64c74cbbfcee1a71177c039\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2\"" Dec 12 17:27:10.070063 containerd[1911]: time="2025-12-12T17:27:10.070005137Z" level=info msg="StartContainer for \"61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2\"" Dec 12 17:27:10.073103 containerd[1911]: time="2025-12-12T17:27:10.073039373Z" level=info msg="connecting to shim 61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2" address="unix:///run/containerd/s/66f7a9c42dae5bb5888110d5e88fc64d6ef76523eb959ad8b564477de9096fe0" protocol=ttrpc version=3 Dec 12 17:27:10.084598 containerd[1911]: time="2025-12-12T17:27:10.084326981Z" level=info msg="Container 755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:10.107475 containerd[1911]: time="2025-12-12T17:27:10.107401505Z" level=info msg="CreateContainer within sandbox \"9208360a0a5479dcbbea6d6565672ef270ce58f639d1fd6e0187503d9e20b4fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c\"" Dec 12 17:27:10.110572 containerd[1911]: time="2025-12-12T17:27:10.109442669Z" level=info msg="StartContainer for \"755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c\"" Dec 12 17:27:10.109994 systemd[1]: Started cri-containerd-5c05203a8c9697d4c059060e89bab65e1888830de3a247e6042b1b51acb93ffc.scope - libcontainer container 5c05203a8c9697d4c059060e89bab65e1888830de3a247e6042b1b51acb93ffc. 
Dec 12 17:27:10.128015 containerd[1911]: time="2025-12-12T17:27:10.127957937Z" level=info msg="connecting to shim 755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c" address="unix:///run/containerd/s/922d4b1877b21ea61c091444366624dc4fd525f94357720e7652aaa90e7ec905" protocol=ttrpc version=3 Dec 12 17:27:10.147316 systemd[1]: Started cri-containerd-61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2.scope - libcontainer container 61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2. Dec 12 17:27:10.150417 kubelet[2866]: E1212 17:27:10.149720 2866 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-228&limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:27:10.169000 audit: BPF prog-id=104 op=LOAD Dec 12 17:27:10.171000 audit: BPF prog-id=105 op=LOAD Dec 12 17:27:10.171000 audit[3045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2912 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303532303361386339363937643463303539303630653839626162 Dec 12 17:27:10.172000 audit: BPF prog-id=105 op=UNLOAD Dec 12 17:27:10.172000 audit[3045]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2912 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303532303361386339363937643463303539303630653839626162 Dec 12 17:27:10.172000 audit: BPF prog-id=106 op=LOAD Dec 12 17:27:10.172000 audit[3045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2912 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303532303361386339363937643463303539303630653839626162 Dec 12 17:27:10.172000 audit: BPF prog-id=107 op=LOAD Dec 12 17:27:10.172000 audit[3045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2912 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.172000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303532303361386339363937643463303539303630653839626162 Dec 12 17:27:10.172000 audit: BPF prog-id=107 op=UNLOAD Dec 12 17:27:10.172000 audit[3045]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2912 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303532303361386339363937643463303539303630653839626162 Dec 12 17:27:10.172000 audit: BPF prog-id=106 op=UNLOAD Dec 12 17:27:10.172000 audit[3045]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2912 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303532303361386339363937643463303539303630653839626162 Dec 12 17:27:10.172000 audit: BPF prog-id=108 op=LOAD Dec 12 17:27:10.172000 audit[3045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2912 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303532303361386339363937643463303539303630653839626162 Dec 12 17:27:10.188330 systemd[1]: Started cri-containerd-755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c.scope - libcontainer container 755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c. 
Dec 12 17:27:10.207000 audit: BPF prog-id=109 op=LOAD Dec 12 17:27:10.210000 audit: BPF prog-id=110 op=LOAD Dec 12 17:27:10.210000 audit[3053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2934 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631666631666561666335323633366662613366623266363935623862 Dec 12 17:27:10.211000 audit: BPF prog-id=110 op=UNLOAD Dec 12 17:27:10.211000 audit[3053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631666631666561666335323633366662613366623266363935623862 Dec 12 17:27:10.214000 audit: BPF prog-id=111 op=LOAD Dec 12 17:27:10.214000 audit[3053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2934 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631666631666561666335323633366662613366623266363935623862 Dec 12 17:27:10.214000 audit: BPF prog-id=112 op=LOAD Dec 12 17:27:10.214000 audit[3053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2934 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631666631666561666335323633366662613366623266363935623862 Dec 12 17:27:10.216000 audit: BPF prog-id=112 op=UNLOAD Dec 12 17:27:10.216000 audit[3053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631666631666561666335323633366662613366623266363935623862 Dec 12 17:27:10.220000 audit: BPF prog-id=111 op=UNLOAD Dec 12 17:27:10.220000 audit[3053]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631666631666561666335323633366662613366623266363935623862 Dec 12 17:27:10.220000 audit: BPF prog-id=113 op=LOAD Dec 12 17:27:10.220000 audit[3053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2934 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631666631666561666335323633366662613366623266363935623862 Dec 12 17:27:10.244235 kubelet[2866]: E1212 17:27:10.244136 2866 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.228:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:27:10.261000 audit: BPF prog-id=114 op=LOAD Dec 12 17:27:10.265000 audit: BPF prog-id=115 op=LOAD Dec 12 17:27:10.265000 audit[3077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2968 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735356638636563636533663630313165666263353838313437633461 Dec 12 17:27:10.265000 audit: BPF prog-id=115 op=UNLOAD Dec 12 17:27:10.265000 audit[3077]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735356638636563636533663630313165666263353838313437633461 Dec 12 17:27:10.269000 audit: BPF prog-id=116 op=LOAD Dec 12 17:27:10.269000 audit[3077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2968 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.269000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735356638636563636533663630313165666263353838313437633461 Dec 12 17:27:10.271000 audit: BPF prog-id=117 op=LOAD Dec 12 17:27:10.271000 audit[3077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2968 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.271000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735356638636563636533663630313165666263353838313437633461 Dec 12 17:27:10.273000 audit: BPF prog-id=117 op=UNLOAD Dec 12 17:27:10.273000 audit[3077]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735356638636563636533663630313165666263353838313437633461 Dec 12 17:27:10.273000 audit: BPF prog-id=116 op=UNLOAD Dec 12 17:27:10.273000 audit[3077]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735356638636563636533663630313165666263353838313437633461 Dec 12 17:27:10.275000 audit: BPF prog-id=118 op=LOAD Dec 12 17:27:10.275000 audit[3077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2968 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:10.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735356638636563636533663630313165666263353838313437633461 Dec 12 17:27:10.304445 containerd[1911]: time="2025-12-12T17:27:10.304374702Z" level=info msg="StartContainer for \"5c05203a8c9697d4c059060e89bab65e1888830de3a247e6042b1b51acb93ffc\" returns successfully" Dec 12 17:27:10.366968 containerd[1911]: time="2025-12-12T17:27:10.366782358Z" level=info msg="StartContainer for \"61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2\" returns successfully" Dec 12 17:27:10.396957 containerd[1911]: time="2025-12-12T17:27:10.396820375Z" level=info msg="StartContainer for 
\"755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c\" returns successfully" Dec 12 17:27:10.518060 kubelet[2866]: E1212 17:27:10.517988 2866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": dial tcp 172.31.17.228:6443: connect: connection refused" interval="1.6s" Dec 12 17:27:10.783314 kubelet[2866]: I1212 17:27:10.782032 2866 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Dec 12 17:27:11.290333 kubelet[2866]: E1212 17:27:11.290257 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:11.298566 kubelet[2866]: E1212 17:27:11.298447 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:11.308402 kubelet[2866]: E1212 17:27:11.308348 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:12.311701 kubelet[2866]: E1212 17:27:12.310621 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:12.312753 kubelet[2866]: E1212 17:27:12.312580 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:12.313404 kubelet[2866]: E1212 17:27:12.313139 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:13.315723 kubelet[2866]: E1212 17:27:13.313800 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:13.315723 kubelet[2866]: E1212 17:27:13.314402 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:14.316231 kubelet[2866]: E1212 17:27:14.315986 2866 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-228\" not found" node="ip-172-31-17-228" Dec 12 17:27:14.692693 kubelet[2866]: I1212 17:27:14.691838 2866 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-228" Dec 12 17:27:14.692693 kubelet[2866]: E1212 17:27:14.691899 2866 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-228\": node \"ip-172-31-17-228\" not found" Dec 12 17:27:14.712797 kubelet[2866]: I1212 17:27:14.712735 2866 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-228" Dec 12 17:27:14.798693 kubelet[2866]: E1212 17:27:14.797388 2866 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-17-228.188087db1d20ed04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-228,UID:ip-172-31-17-228,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-228,},FirstTimestamp:2025-12-12 17:27:09.085519108 +0000 UTC m=+2.023588679,LastTimestamp:2025-12-12 17:27:09.085519108 +0000 UTC m=+2.023588679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-228,}" Dec 12 17:27:14.832540 kubelet[2866]: E1212 17:27:14.832310 2866 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-228" Dec 12 17:27:14.832540 kubelet[2866]: I1212 17:27:14.832365 2866 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:14.847301 kubelet[2866]: E1212 17:27:14.847076 2866 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:14.847301 kubelet[2866]: I1212 17:27:14.847124 2866 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:14.857307 kubelet[2866]: E1212 17:27:14.857131 2866 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:15.087632 kubelet[2866]: I1212 17:27:15.087257 2866 apiserver.go:52] "Watching apiserver" Dec 12 17:27:15.112544 kubelet[2866]: I1212 17:27:15.112482 2866 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:27:15.937921 update_engine[1884]: I20251212 17:27:15.937837 1884 update_attempter.cc:509] Updating boot flags... Dec 12 17:27:17.023341 kubelet[2866]: I1212 17:27:17.023230 2866 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-228" Dec 12 17:27:17.567757 systemd[1]: Reload requested from client PID 3421 ('systemctl') (unit session-7.scope)... Dec 12 17:27:17.567857 systemd[1]: Reloading... Dec 12 17:27:17.888719 zram_generator::config[3471]: No configuration found. Dec 12 17:27:18.526311 systemd[1]: Reloading finished in 957 ms. Dec 12 17:27:18.568105 kubelet[2866]: I1212 17:27:18.567751 2866 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:27:18.568396 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:27:18.591937 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:27:18.593749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:27:18.593877 systemd[1]: kubelet.service: Consumed 2.952s CPU time, 127.1M memory peak. Dec 12 17:27:18.601272 kernel: kauditd_printk_skb: 158 callbacks suppressed Dec 12 17:27:18.601417 kernel: audit: type=1131 audit(1765560438.593:412): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:18.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:18.599142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:27:18.602000 audit: BPF prog-id=119 op=LOAD Dec 12 17:27:18.602000 audit: BPF prog-id=86 op=UNLOAD Dec 12 17:27:18.604777 kernel: audit: type=1334 audit(1765560438.602:413): prog-id=119 op=LOAD Dec 12 17:27:18.605000 audit: BPF prog-id=120 op=LOAD Dec 12 17:27:18.606736 kernel: audit: type=1334 audit(1765560438.602:414): prog-id=86 op=UNLOAD Dec 12 17:27:18.609793 kernel: audit: type=1334 audit(1765560438.605:415): prog-id=120 op=LOAD Dec 12 17:27:18.609902 kernel: audit: type=1334 audit(1765560438.606:416): prog-id=121 op=LOAD Dec 12 17:27:18.606000 audit: BPF prog-id=121 op=LOAD Dec 12 17:27:18.606000 audit: BPF prog-id=87 op=UNLOAD Dec 12 17:27:18.611610 kernel: audit: type=1334 audit(1765560438.606:417): prog-id=87 op=UNLOAD Dec 12 17:27:18.606000 audit: BPF prog-id=88 op=UNLOAD Dec 12 17:27:18.613372 kernel: audit: type=1334 audit(1765560438.606:418): prog-id=88 op=UNLOAD Dec 12 17:27:18.614000 audit: BPF prog-id=122 op=LOAD Dec 12 17:27:18.614000 audit: BPF prog-id=70 op=UNLOAD Dec 12 17:27:18.618010 kernel: audit: type=1334 audit(1765560438.614:419): prog-id=122 op=LOAD Dec 12 17:27:18.618133 kernel: audit: type=1334 audit(1765560438.614:420): prog-id=70 op=UNLOAD Dec 12 17:27:18.618000 audit: BPF prog-id=123 op=LOAD Dec 12 17:27:18.618000 audit: BPF prog-id=82 op=UNLOAD Dec 12 17:27:18.619000 audit: BPF prog-id=124 op=LOAD Dec 12 17:27:18.620799 kernel: audit: type=1334 audit(1765560438.618:421): prog-id=123 op=LOAD Dec 12 17:27:18.619000 audit: BPF prog-id=69 op=UNLOAD Dec 12 17:27:18.624000 audit: BPF prog-id=125 op=LOAD Dec 12 17:27:18.627000 audit: BPF prog-id=74 op=UNLOAD Dec 12 17:27:18.627000 audit: BPF prog-id=126 op=LOAD Dec 12 17:27:18.627000 audit: BPF prog-id=127 op=LOAD Dec 12 17:27:18.627000 audit: BPF prog-id=75 op=UNLOAD Dec 12 17:27:18.627000 audit: BPF prog-id=76 op=UNLOAD Dec 12 17:27:18.633000 audit: BPF prog-id=128 op=LOAD Dec 12 17:27:18.633000 audit: BPF prog-id=83 op=UNLOAD Dec 12 17:27:18.634000 audit: BPF prog-id=129 op=LOAD Dec 12 17:27:18.634000 audit: BPF prog-id=130 op=LOAD Dec 12 17:27:18.634000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:27:18.634000 audit: BPF prog-id=85 op=UNLOAD Dec 12 17:27:18.636000 audit: BPF prog-id=131 op=LOAD Dec 12 17:27:18.637000 audit: BPF prog-id=77 op=UNLOAD Dec 12 17:27:18.637000 audit: BPF prog-id=132 op=LOAD Dec 12 17:27:18.637000 audit: BPF prog-id=133 op=LOAD Dec 12 17:27:18.637000 audit: BPF prog-id=78 op=UNLOAD Dec 12 17:27:18.637000 audit: BPF prog-id=79 op=UNLOAD Dec 12 17:27:18.638000 audit: BPF prog-id=134 op=LOAD Dec 12 17:27:18.638000 audit: BPF prog-id=71 op=UNLOAD Dec 12 17:27:18.639000 audit: BPF prog-id=135 op=LOAD Dec 12 17:27:18.639000 audit: BPF prog-id=136 op=LOAD Dec 12 17:27:18.639000 audit: BPF prog-id=72 op=UNLOAD Dec 12 17:27:18.639000 audit: BPF prog-id=73 op=UNLOAD Dec 12 17:27:18.640000 audit: BPF prog-id=137 op=LOAD Dec 12 17:27:18.640000 audit: BPF prog-id=138 op=LOAD Dec 12 17:27:18.640000 audit: BPF prog-id=80 op=UNLOAD Dec 12 17:27:18.640000 audit: BPF prog-id=81 op=UNLOAD Dec 12 17:27:19.002595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 17:27:19.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:19.022056 (kubelet)[3528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:27:19.126512 kubelet[3528]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:27:19.129047 kubelet[3528]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:27:19.129047 kubelet[3528]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:27:19.129047 kubelet[3528]: I1212 17:27:19.127306 3528 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:27:19.144629 kubelet[3528]: I1212 17:27:19.144552 3528 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:27:19.144629 kubelet[3528]: I1212 17:27:19.144611 3528 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:27:19.145797 kubelet[3528]: I1212 17:27:19.145732 3528 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:27:19.154840 kubelet[3528]: I1212 17:27:19.154762 3528 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:27:19.166057 kubelet[3528]: I1212 17:27:19.165988 3528 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:27:19.181591 kubelet[3528]: I1212 17:27:19.181394 3528 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:27:19.194687 kubelet[3528]: I1212 17:27:19.194524 3528 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:27:19.195367 kubelet[3528]: I1212 17:27:19.195248 3528 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:27:19.195890 kubelet[3528]: I1212 17:27:19.195551 3528 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-228","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:27:19.196206 kubelet[3528]: I1212 17:27:19.196175 3528 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:27:19.196314 kubelet[3528]: I1212 17:27:19.196295 3528 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:27:19.196539 kubelet[3528]: I1212 17:27:19.196513 3528 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:27:19.197062 kubelet[3528]: I1212 17:27:19.197024 3528 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:27:19.197290 kubelet[3528]: I1212 17:27:19.197261 3528 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:27:19.197520 kubelet[3528]: I1212 17:27:19.197490 3528 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:27:19.197815 kubelet[3528]: I1212 17:27:19.197703 3528 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:27:19.202396 kubelet[3528]: I1212 17:27:19.202329 3528 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:27:19.204039 kubelet[3528]: I1212 17:27:19.203995 3528 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:27:19.208731 kubelet[3528]: I1212 17:27:19.208456 3528 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:27:19.208731 kubelet[3528]: I1212 17:27:19.208543 3528 server.go:1289] "Started kubelet" Dec 12 17:27:19.213612 kubelet[3528]: I1212 17:27:19.213566 3528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:27:19.216689 kubelet[3528]: I1212 
17:27:19.215470 3528 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:27:19.224700 kubelet[3528]: I1212 17:27:19.223200 3528 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:27:19.241455 kubelet[3528]: I1212 17:27:19.241365 3528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:27:19.242058 kubelet[3528]: I1212 17:27:19.242012 3528 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:27:19.252500 kubelet[3528]: I1212 17:27:19.252453 3528 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:27:19.253847 kubelet[3528]: I1212 17:27:19.253359 3528 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:27:19.253847 kubelet[3528]: I1212 17:27:19.253604 3528 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:27:19.265818 kubelet[3528]: I1212 17:27:19.265762 3528 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:27:19.266225 kubelet[3528]: E1212 17:27:19.266187 3528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-228\" not found" Dec 12 17:27:19.269549 kubelet[3528]: I1212 17:27:19.269482 3528 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:27:19.270128 kubelet[3528]: I1212 17:27:19.270078 3528 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:27:19.279725 kubelet[3528]: I1212 17:27:19.278472 3528 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:27:19.318756 kubelet[3528]: I1212 17:27:19.318105 3528 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:27:19.321156 kubelet[3528]: I1212 17:27:19.321112 3528 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:27:19.321381 kubelet[3528]: I1212 17:27:19.321356 3528 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:27:19.321508 kubelet[3528]: I1212 17:27:19.321486 3528 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:27:19.321616 kubelet[3528]: I1212 17:27:19.321598 3528 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:27:19.321886 kubelet[3528]: E1212 17:27:19.321852 3528 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:27:19.422915 kubelet[3528]: E1212 17:27:19.422875 3528 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:27:19.425619 kubelet[3528]: I1212 17:27:19.424920 3528 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:27:19.425619 kubelet[3528]: I1212 17:27:19.424948 3528 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:27:19.425619 kubelet[3528]: I1212 17:27:19.424983 3528 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:27:19.425619 kubelet[3528]: I1212 17:27:19.425202 3528 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:27:19.425619 kubelet[3528]: I1212 17:27:19.425221 3528 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:27:19.425619 kubelet[3528]: I1212 17:27:19.425253 3528 policy_none.go:49] "None policy: Start" Dec 12 17:27:19.425619 kubelet[3528]: I1212 17:27:19.425272 3528 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:27:19.425619 kubelet[3528]: I1212 17:27:19.425293 3528 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:27:19.425619 kubelet[3528]: I1212 17:27:19.425484 3528 state_mem.go:75] "Updated machine memory state" Dec 12 17:27:19.437391 kubelet[3528]: E1212 17:27:19.437354 3528 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:27:19.439595 kubelet[3528]: I1212 17:27:19.438980 3528 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:27:19.439595 kubelet[3528]: I1212 17:27:19.439014 3528 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:27:19.439595 kubelet[3528]: I1212 17:27:19.439379 3528 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:27:19.443478 kubelet[3528]: E1212 17:27:19.443410 3528 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:27:19.556318 kubelet[3528]: I1212 17:27:19.555944 3528 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-228" Dec 12 17:27:19.573160 kubelet[3528]: I1212 17:27:19.572801 3528 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-228" Dec 12 17:27:19.573160 kubelet[3528]: I1212 17:27:19.572913 3528 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-228" Dec 12 17:27:19.624269 kubelet[3528]: I1212 17:27:19.624214 3528 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:19.624747 kubelet[3528]: I1212 17:27:19.624587 3528 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-228" Dec 12 17:27:19.625709 kubelet[3528]: I1212 17:27:19.625102 3528 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:19.643401 kubelet[3528]: E1212 17:27:19.643273 3528 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-228\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-228" Dec 12 17:27:19.675240 kubelet[3528]: I1212 17:27:19.675058 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acc61a7e435ab639eec35a9fb1aef45d-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"acc61a7e435ab639eec35a9fb1aef45d\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:19.675602 kubelet[3528]: I1212 17:27:19.675568 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:19.676368 kubelet[3528]: I1212 17:27:19.676243 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb49638034ba188aa388bee0d3be3adf-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-228\" (UID: \"fb49638034ba188aa388bee0d3be3adf\") " pod="kube-system/kube-scheduler-ip-172-31-17-228" Dec 12 17:27:19.677741 kubelet[3528]: I1212 17:27:19.677100 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acc61a7e435ab639eec35a9fb1aef45d-ca-certs\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"acc61a7e435ab639eec35a9fb1aef45d\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:19.679017 kubelet[3528]: I1212 17:27:19.678736 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acc61a7e435ab639eec35a9fb1aef45d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-228\" (UID: \"acc61a7e435ab639eec35a9fb1aef45d\") " pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:19.679017 kubelet[3528]: I1212 17:27:19.678913 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:19.679017 kubelet[3528]: I1212 17:27:19.678994 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:19.679252 kubelet[3528]: I1212 17:27:19.679087 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:19.679252 kubelet[3528]: I1212 17:27:19.679130 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/baf5aa8dcab57805a7d0f2676c6acd30-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-228\" (UID: \"baf5aa8dcab57805a7d0f2676c6acd30\") " pod="kube-system/kube-controller-manager-ip-172-31-17-228" Dec 12 17:27:20.199371 kubelet[3528]: I1212 17:27:20.199302 3528 apiserver.go:52] "Watching apiserver" Dec 12 17:27:20.270284 kubelet[3528]: I1212 17:27:20.270154 3528 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:27:20.389922 kubelet[3528]: I1212 17:27:20.389865 3528 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:20.403185 kubelet[3528]: E1212 17:27:20.403053 3528 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-228\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-228" Dec 12 17:27:20.437528 kubelet[3528]: I1212 17:27:20.437384 3528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-228" podStartSLOduration=1.437336788 podStartE2EDuration="1.437336788s" podCreationTimestamp="2025-12-12 17:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:20.434613268 +0000 UTC m=+1.398894727" watchObservedRunningTime="2025-12-12 17:27:20.437336788 +0000 UTC m=+1.401618211" Dec 12 17:27:20.488039 kubelet[3528]: I1212 17:27:20.486820 3528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-228" podStartSLOduration=3.486796805 podStartE2EDuration="3.486796805s" podCreationTimestamp="2025-12-12 17:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:20.462652157 +0000 UTC m=+1.426933628" watchObservedRunningTime="2025-12-12 17:27:20.486796805 +0000 UTC m=+1.451078252" Dec 12 17:27:20.600930 kubelet[3528]: I1212 17:27:20.600829 3528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-228" podStartSLOduration=1.600802457 podStartE2EDuration="1.600802457s" podCreationTimestamp="2025-12-12 17:27:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:20.488550077 +0000 UTC m=+1.452831512" watchObservedRunningTime="2025-12-12 17:27:20.600802457 +0000 UTC m=+1.565083904" Dec 12 17:27:22.573989 kubelet[3528]: I1212 17:27:22.573838 3528 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:27:22.575536 kubelet[3528]: I1212 17:27:22.575075 3528 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:27:22.575602 containerd[1911]: time="2025-12-12T17:27:22.574453459Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:27:23.749252 systemd[1]: Created slice kubepods-besteffort-pod7b5e2b01_238c_43d7_bc6d_56686cbe8284.slice - libcontainer container kubepods-besteffort-pod7b5e2b01_238c_43d7_bc6d_56686cbe8284.slice. Dec 12 17:27:23.808706 kubelet[3528]: I1212 17:27:23.808628 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b5e2b01-238c-43d7-bc6d-56686cbe8284-kube-proxy\") pod \"kube-proxy-59kpj\" (UID: \"7b5e2b01-238c-43d7-bc6d-56686cbe8284\") " pod="kube-system/kube-proxy-59kpj" Dec 12 17:27:23.809283 kubelet[3528]: I1212 17:27:23.808712 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b5e2b01-238c-43d7-bc6d-56686cbe8284-xtables-lock\") pod \"kube-proxy-59kpj\" (UID: \"7b5e2b01-238c-43d7-bc6d-56686cbe8284\") " pod="kube-system/kube-proxy-59kpj" Dec 12 17:27:23.809283 kubelet[3528]: I1212 17:27:23.808757 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b5e2b01-238c-43d7-bc6d-56686cbe8284-lib-modules\") pod \"kube-proxy-59kpj\" (UID: \"7b5e2b01-238c-43d7-bc6d-56686cbe8284\") " pod="kube-system/kube-proxy-59kpj" Dec 12 17:27:23.809283 kubelet[3528]: I1212 17:27:23.808801 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhn7\" (UniqueName: \"kubernetes.io/projected/7b5e2b01-238c-43d7-bc6d-56686cbe8284-kube-api-access-2dhn7\") pod \"kube-proxy-59kpj\" (UID: \"7b5e2b01-238c-43d7-bc6d-56686cbe8284\") " pod="kube-system/kube-proxy-59kpj" Dec 12 17:27:23.897510 systemd[1]: Created slice kubepods-besteffort-pode910d3af_f6ed_46c3_b4af_89fd07a8b3ed.slice - libcontainer container kubepods-besteffort-pode910d3af_f6ed_46c3_b4af_89fd07a8b3ed.slice. 
Dec 12 17:27:23.910109 kubelet[3528]: I1212 17:27:23.910029 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls49m\" (UniqueName: \"kubernetes.io/projected/e910d3af-f6ed-46c3-b4af-89fd07a8b3ed-kube-api-access-ls49m\") pod \"tigera-operator-7dcd859c48-6fjl5\" (UID: \"e910d3af-f6ed-46c3-b4af-89fd07a8b3ed\") " pod="tigera-operator/tigera-operator-7dcd859c48-6fjl5" Dec 12 17:27:23.910109 kubelet[3528]: I1212 17:27:23.910114 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e910d3af-f6ed-46c3-b4af-89fd07a8b3ed-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6fjl5\" (UID: \"e910d3af-f6ed-46c3-b4af-89fd07a8b3ed\") " pod="tigera-operator/tigera-operator-7dcd859c48-6fjl5" Dec 12 17:27:24.063132 containerd[1911]: time="2025-12-12T17:27:24.062505390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59kpj,Uid:7b5e2b01-238c-43d7-bc6d-56686cbe8284,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:24.110488 containerd[1911]: time="2025-12-12T17:27:24.110343115Z" level=info msg="connecting to shim e92c8f341754560d435489d325d8c0547c7b26b3b9697f55d8ef41b41ad5cead" address="unix:///run/containerd/s/b15a5123c7fa1cb253571e92ed4de93d7dca805ed9836188615a678660cf8c76" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:24.160412 systemd[1]: Started cri-containerd-e92c8f341754560d435489d325d8c0547c7b26b3b9697f55d8ef41b41ad5cead.scope - libcontainer container e92c8f341754560d435489d325d8c0547c7b26b3b9697f55d8ef41b41ad5cead. Dec 12 17:27:24.191225 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:27:24.191380 kernel: audit: type=1334 audit(1765560444.187:454): prog-id=139 op=LOAD Dec 12 17:27:24.187000 audit: BPF prog-id=139 op=LOAD Dec 12 17:27:24.195128 kernel: audit: type=1334 audit(1765560444.191:455): prog-id=140 op=LOAD Dec 12 17:27:24.191000 audit: BPF prog-id=140 op=LOAD Dec 12 17:27:24.191000 audit[3595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.202452 kernel: audit: type=1300 audit(1765560444.191:455): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.202819 kernel: audit: type=1327 audit(1765560444.191:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.192000 audit: BPF prog-id=140 op=UNLOAD Dec 12 17:27:24.210976 kernel: audit: type=1334 audit(1765560444.192:456): prog-id=140 op=UNLOAD Dec 12 17:27:24.211522 containerd[1911]: 
time="2025-12-12T17:27:24.211474615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6fjl5,Uid:e910d3af-f6ed-46c3-b4af-89fd07a8b3ed,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:27:24.192000 audit[3595]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.218014 kernel: audit: type=1300 audit(1765560444.192:456): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.218319 kernel: audit: type=1327 audit(1765560444.192:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.192000 audit: BPF prog-id=141 op=LOAD Dec 12 17:27:24.225542 kernel: audit: type=1334 audit(1765560444.192:457): prog-id=141 op=LOAD Dec 12 17:27:24.225967 kernel: audit: type=1300 audit(1765560444.192:457): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.192000 audit[3595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.237733 kernel: audit: type=1327 audit(1765560444.192:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.192000 audit: BPF prog-id=142 op=LOAD Dec 12 17:27:24.192000 audit[3595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.192000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.192000 audit: BPF prog-id=142 op=UNLOAD Dec 12 17:27:24.192000 audit[3595]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.192000 audit: BPF prog-id=141 op=UNLOAD Dec 12 17:27:24.192000 audit[3595]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.194000 audit: BPF prog-id=143 op=LOAD Dec 12 17:27:24.194000 audit[3595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3584 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539326338663334313735343536306434333534383964333235643863 Dec 12 17:27:24.278098 containerd[1911]: time="2025-12-12T17:27:24.278036072Z" level=info msg="connecting to shim a2d9c5069c5c76b24385ec1ac4c0e128878b5991547ec01878d864b81c8f3542" address="unix:///run/containerd/s/e74a6ed098ce1b5321d9990f9d417bc998c5999cfe1e8d5368a7c070443c03da" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:24.284167 containerd[1911]: time="2025-12-12T17:27:24.284105444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59kpj,Uid:7b5e2b01-238c-43d7-bc6d-56686cbe8284,Namespace:kube-system,Attempt:0,} returns sandbox id \"e92c8f341754560d435489d325d8c0547c7b26b3b9697f55d8ef41b41ad5cead\"" Dec 12 17:27:24.304373 containerd[1911]: time="2025-12-12T17:27:24.304251680Z" level=info msg="CreateContainer within sandbox \"e92c8f341754560d435489d325d8c0547c7b26b3b9697f55d8ef41b41ad5cead\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:27:24.326138 systemd[1]: Started cri-containerd-a2d9c5069c5c76b24385ec1ac4c0e128878b5991547ec01878d864b81c8f3542.scope - libcontainer container a2d9c5069c5c76b24385ec1ac4c0e128878b5991547ec01878d864b81c8f3542. 
Dec 12 17:27:24.334284 containerd[1911]: time="2025-12-12T17:27:24.334221308Z" level=info msg="Container fcb379e10e2cb7b402d86ff91a139be8264bc04adcb95fc57b6649f7e05940f6: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:24.356279 containerd[1911]: time="2025-12-12T17:27:24.356169356Z" level=info msg="CreateContainer within sandbox \"e92c8f341754560d435489d325d8c0547c7b26b3b9697f55d8ef41b41ad5cead\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fcb379e10e2cb7b402d86ff91a139be8264bc04adcb95fc57b6649f7e05940f6\"" Dec 12 17:27:24.357813 containerd[1911]: time="2025-12-12T17:27:24.357716324Z" level=info msg="StartContainer for \"fcb379e10e2cb7b402d86ff91a139be8264bc04adcb95fc57b6649f7e05940f6\"" Dec 12 17:27:24.362841 containerd[1911]: time="2025-12-12T17:27:24.362734916Z" level=info msg="connecting to shim fcb379e10e2cb7b402d86ff91a139be8264bc04adcb95fc57b6649f7e05940f6" address="unix:///run/containerd/s/b15a5123c7fa1cb253571e92ed4de93d7dca805ed9836188615a678660cf8c76" protocol=ttrpc version=3 Dec 12 17:27:24.373000 audit: BPF prog-id=144 op=LOAD Dec 12 17:27:24.374000 audit: BPF prog-id=145 op=LOAD Dec 12 17:27:24.374000 audit[3643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3631 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132643963353036396335633736623234333835656331616334633065 Dec 12 17:27:24.376000 audit: BPF prog-id=145 op=UNLOAD Dec 12 17:27:24.376000 audit[3643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132643963353036396335633736623234333835656331616334633065 Dec 12 17:27:24.376000 audit: BPF prog-id=146 op=LOAD Dec 12 17:27:24.376000 audit[3643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3631 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132643963353036396335633736623234333835656331616334633065 Dec 12 17:27:24.376000 audit: BPF prog-id=147 op=LOAD Dec 12 17:27:24.376000 audit[3643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3631 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.376000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132643963353036396335633736623234333835656331616334633065 Dec 12 17:27:24.376000 audit: BPF prog-id=147 op=UNLOAD Dec 12 17:27:24.376000 audit[3643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132643963353036396335633736623234333835656331616334633065 Dec 12 17:27:24.376000 audit: BPF prog-id=146 op=UNLOAD Dec 12 17:27:24.376000 audit[3643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132643963353036396335633736623234333835656331616334633065 Dec 12 17:27:24.377000 audit: BPF prog-id=148 op=LOAD Dec 12 17:27:24.377000 audit[3643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3631 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132643963353036396335633736623234333835656331616334633065 Dec 12 17:27:24.422052 systemd[1]: Started cri-containerd-fcb379e10e2cb7b402d86ff91a139be8264bc04adcb95fc57b6649f7e05940f6.scope - libcontainer container fcb379e10e2cb7b402d86ff91a139be8264bc04adcb95fc57b6649f7e05940f6. 
Dec 12 17:27:24.490164 containerd[1911]: time="2025-12-12T17:27:24.489844485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6fjl5,Uid:e910d3af-f6ed-46c3-b4af-89fd07a8b3ed,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a2d9c5069c5c76b24385ec1ac4c0e128878b5991547ec01878d864b81c8f3542\"" Dec 12 17:27:24.500719 containerd[1911]: time="2025-12-12T17:27:24.499602873Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:27:24.537000 audit: BPF prog-id=149 op=LOAD Dec 12 17:27:24.537000 audit[3662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3584 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623337396531306532636237623430326438366666393161313339 Dec 12 17:27:24.537000 audit: BPF prog-id=150 op=LOAD Dec 12 17:27:24.537000 audit[3662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3584 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623337396531306532636237623430326438366666393161313339 Dec 12 17:27:24.537000 audit: BPF prog-id=150 op=UNLOAD Dec 12 17:27:24.537000 audit[3662]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3584 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623337396531306532636237623430326438366666393161313339 Dec 12 17:27:24.537000 audit: BPF prog-id=149 op=UNLOAD Dec 12 17:27:24.537000 audit[3662]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3584 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623337396531306532636237623430326438366666393161313339 Dec 12 17:27:24.537000 audit: BPF prog-id=151 op=LOAD Dec 12 17:27:24.537000 audit[3662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3584 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623337396531306532636237623430326438366666393161313339 Dec 12 17:27:24.588549 containerd[1911]: time="2025-12-12T17:27:24.587614773Z" level=info msg="StartContainer for \"fcb379e10e2cb7b402d86ff91a139be8264bc04adcb95fc57b6649f7e05940f6\" returns successfully" Dec 12 17:27:24.885000 audit[3733]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3733 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:24.885000 audit[3733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1bb73b0 a2=0 a3=1 items=0 ppid=3675 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.885000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:27:24.889000 audit[3735]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3735 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:24.889000 audit[3735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2fa3330 a2=0 a3=1 items=0 ppid=3675 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.889000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:27:24.892000 audit[3737]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3737 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:24.892000 audit[3737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7e46090 a2=0 a3=1 items=0 ppid=3675 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.892000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:27:24.893000 audit[3736]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3736 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:24.893000 audit[3736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7c741c0 a2=0 a3=1 items=0 ppid=3675 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:27:24.896000 audit[3739]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3739 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:24.896000 audit[3739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd68e2760 a2=0 a3=1 items=0 ppid=3675 pid=3739 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.896000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:27:24.897000 audit[3738]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3738 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:24.897000 audit[3738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9a97170 a2=0 a3=1 items=0 ppid=3675 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.897000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:27:25.003000 audit[3742]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.003000 audit[3742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdb89dbb0 a2=0 a3=1 items=0 ppid=3675 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:27:25.012000 audit[3744]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.012000 audit[3744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd20306d0 a2=0 a3=1 items=0 ppid=3675 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 17:27:25.028000 audit[3747]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.028000 audit[3747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc39a7700 a2=0 a3=1 items=0 ppid=3675 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.028000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 17:27:25.032000 audit[3748]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.032000 audit[3748]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6529d80 a2=0 a3=1 items=0 ppid=3675 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:27:25.038000 audit[3750]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.038000 audit[3750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc29fe10 a2=0 a3=1 items=0 ppid=3675 pid=3750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:27:25.041000 audit[3751]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3751 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.041000 audit[3751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd414dd0 a2=0 a3=1 items=0 ppid=3675 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.041000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:27:25.048000 audit[3753]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3753 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.048000 audit[3753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc0d77430 a2=0 a3=1 items=0 ppid=3675 pid=3753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:27:25.056000 audit[3756]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3756 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.056000 audit[3756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc1aecc10 a2=0 a3=1 items=0 ppid=3675 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 
17:27:25.059000 audit[3757]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.059000 audit[3757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe21f3f60 a2=0 a3=1 items=0 ppid=3675 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:27:25.066000 audit[3759]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3759 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.066000 audit[3759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc3543020 a2=0 a3=1 items=0 ppid=3675 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.066000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:27:25.069000 audit[3760]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.069000 audit[3760]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4b628a0 a2=0 a3=1 items=0 ppid=3675 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:27:25.075000 audit[3762]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.075000 audit[3762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdda046d0 a2=0 a3=1 items=0 ppid=3675 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:27:25.084000 audit[3765]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3765 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.084000 audit[3765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd1eb410 a2=0 a3=1 items=0 ppid=3675 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.084000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:27:25.093000 audit[3768]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.093000 audit[3768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdc4e7b40 a2=0 a3=1 items=0 ppid=3675 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:27:25.097000 audit[3769]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.097000 audit[3769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd5d10c70 a2=0 a3=1 items=0 ppid=3675 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:27:25.104000 audit[3771]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3771 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.104000 audit[3771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffef6fddc0 a2=0 a3=1 items=0 ppid=3675 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:25.113000 audit[3774]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.113000 audit[3774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffffab75a0 a2=0 a3=1 items=0 ppid=3675 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.113000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:25.115000 audit[3775]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.115000 audit[3775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff1c5e10 
a2=0 a3=1 items=0 ppid=3675 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:27:25.123000 audit[3777]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:25.123000 audit[3777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffc5a5b560 a2=0 a3=1 items=0 ppid=3675 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:27:25.172000 audit[3783]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:25.172000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff1b043c0 a2=0 a3=1 items=0 ppid=3675 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.172000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:25.184000 audit[3783]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:25.184000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff1b043c0 a2=0 a3=1 items=0 ppid=3675 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.184000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:25.188000 audit[3788]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.188000 audit[3788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdfb0faf0 a2=0 a3=1 items=0 ppid=3675 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:27:25.198000 audit[3791]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.198000 audit[3791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe504e100 a2=0 a3=1 items=0 ppid=3675 pid=3791 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.198000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 17:27:25.211000 audit[3794]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.211000 audit[3794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff73df600 a2=0 a3=1 items=0 ppid=3675 pid=3794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 17:27:25.216000 audit[3795]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.216000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8fe4dc0 a2=0 a3=1 items=0 ppid=3675 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:27:25.229000 audit[3797]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.229000 audit[3797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd9eb1980 a2=0 a3=1 items=0 ppid=3675 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.229000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:27:25.236000 audit[3798]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.236000 audit[3798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe487de00 a2=0 a3=1 items=0 ppid=3675 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:27:25.252000 audit[3800]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3800 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.252000 audit[3800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe6bfc690 a2=0 a3=1 items=0 ppid=3675 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 17:27:25.261000 audit[3803]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.261000 audit[3803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffed947a30 a2=0 a3=1 items=0 ppid=3675 pid=3803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.261000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:27:25.264000 audit[3804]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.264000 audit[3804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0f4a6e0 a2=0 a3=1 items=0 ppid=3675 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:27:25.270000 audit[3806]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3806 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.270000 audit[3806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffda4d2af0 a2=0 a3=1 items=0 ppid=3675 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:27:25.273000 audit[3807]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.273000 audit[3807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff98703f0 a2=0 a3=1 items=0 ppid=3675 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.273000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:27:25.279000 audit[3809]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.279000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4b5bc70 a2=0 a3=1 items=0 ppid=3675 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:27:25.288000 audit[3812]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3812 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.288000 audit[3812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebab3f30 a2=0 a3=1 items=0 ppid=3675 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.288000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:27:25.297000 audit[3815]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.297000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff0a4ca10 a2=0 a3=1 items=0 ppid=3675 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.297000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 17:27:25.301000 audit[3816]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.301000 audit[3816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffef907ba0 a2=0 a3=1 items=0 ppid=3675 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.301000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:27:25.306000 audit[3818]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.306000 audit[3818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcd766840 a2=0 a3=1 items=0 ppid=3675 pid=3818 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.306000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:25.315000 audit[3821]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.315000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe951d260 a2=0 a3=1 items=0 ppid=3675 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:25.318000 audit[3822]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.318000 audit[3822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee4f3600 a2=0 a3=1 items=0 ppid=3675 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:27:25.327000 audit[3824]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3824 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.327000 audit[3824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffca9e4f80 a2=0 a3=1 items=0 ppid=3675 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:27:25.331000 audit[3825]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3825 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.331000 audit[3825]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffaa7a5a0 a2=0 a3=1 items=0 ppid=3675 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:27:25.338000 audit[3827]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.338000 
audit[3827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd1e822f0 a2=0 a3=1 items=0 ppid=3675 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.338000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:27:25.350000 audit[3830]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3830 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:25.350000 audit[3830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd3296c50 a2=0 a3=1 items=0 ppid=3675 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.350000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:27:25.364000 audit[3832]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:27:25.364000 audit[3832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff9047280 a2=0 a3=1 items=0 ppid=3675 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.364000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:25.365000 audit[3832]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:27:25.365000 audit[3832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff9047280 a2=0 a3=1 items=0 ppid=3675 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.365000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:26.073742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049154370.mount: Deactivated successfully. 
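The proctitle= fields in the audit records above are the audited command lines, hex-encoded with NUL-separated arguments. A minimal decoding sketch in Python, using a value copied verbatim from one of the iptables-restore records above:

# Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
# The sample value is copied from the iptables-restore audit records above.
hexval = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
          "002D2D6E6F666C757368002D2D636F756E74657273")
args = bytes.fromhex(hexval).split(b"\x00")
print(" ".join(a.decode() for a in args))
# -> iptables-restore -w 5 -W 100000 --noflush --counters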
Dec 12 17:27:27.685689 containerd[1911]: time="2025-12-12T17:27:27.685599180Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:27.688719 containerd[1911]: time="2025-12-12T17:27:27.688576776Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 12 17:27:27.691117 containerd[1911]: time="2025-12-12T17:27:27.691032996Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:27.696867 containerd[1911]: time="2025-12-12T17:27:27.696782173Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:27.702494 containerd[1911]: time="2025-12-12T17:27:27.701362333Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.201635752s" Dec 12 17:27:27.702494 containerd[1911]: time="2025-12-12T17:27:27.701427613Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:27:27.715128 containerd[1911]: time="2025-12-12T17:27:27.715054741Z" level=info msg="CreateContainer within sandbox \"a2d9c5069c5c76b24385ec1ac4c0e128878b5991547ec01878d864b81c8f3542\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:27:27.731050 containerd[1911]: time="2025-12-12T17:27:27.730111273Z" level=info msg="Container 8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:27.752617 containerd[1911]: time="2025-12-12T17:27:27.752527789Z" level=info msg="CreateContainer within sandbox \"a2d9c5069c5c76b24385ec1ac4c0e128878b5991547ec01878d864b81c8f3542\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f\"" Dec 12 17:27:27.754748 containerd[1911]: time="2025-12-12T17:27:27.753932101Z" level=info msg="StartContainer for \"8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f\"" Dec 12 17:27:27.757451 containerd[1911]: time="2025-12-12T17:27:27.757374361Z" level=info msg="connecting to shim 8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f" address="unix:///run/containerd/s/e74a6ed098ce1b5321d9990f9d417bc998c5999cfe1e8d5368a7c070443c03da" protocol=ttrpc version=3 Dec 12 17:27:27.810997 systemd[1]: Started cri-containerd-8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f.scope - libcontainer container 8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f. 
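A rough throughput check on the tigera-operator image pull recorded above (plain Python; "bytes read" is the compressed data fetched, so this is only an approximation of the network rate):

# Approximate pull rate for quay.io/tigera/operator:v1.38.7, using the
# "bytes read=20773434" and "in 3.201635752s" figures from the entries above.
bytes_read = 20_773_434
elapsed_s = 3.201635752
print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # roughly 6.5 MB/s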
Dec 12 17:27:27.839000 audit: BPF prog-id=152 op=LOAD Dec 12 17:27:27.840000 audit: BPF prog-id=153 op=LOAD Dec 12 17:27:27.840000 audit[3841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3631 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653139326233633430633030313737306639663862636430346265 Dec 12 17:27:27.840000 audit: BPF prog-id=153 op=UNLOAD Dec 12 17:27:27.840000 audit[3841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653139326233633430633030313737306639663862636430346265 Dec 12 17:27:27.840000 audit: BPF prog-id=154 op=LOAD Dec 12 17:27:27.840000 audit[3841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3631 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653139326233633430633030313737306639663862636430346265 Dec 12 17:27:27.841000 audit: BPF prog-id=155 op=LOAD Dec 12 17:27:27.841000 audit[3841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3631 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653139326233633430633030313737306639663862636430346265 Dec 12 17:27:27.842000 audit: BPF prog-id=155 op=UNLOAD Dec 12 17:27:27.842000 audit[3841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653139326233633430633030313737306639663862636430346265 Dec 12 17:27:27.842000 audit: BPF prog-id=154 op=UNLOAD Dec 12 17:27:27.842000 audit[3841]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653139326233633430633030313737306639663862636430346265 Dec 12 17:27:27.843000 audit: BPF prog-id=156 op=LOAD Dec 12 17:27:27.843000 audit[3841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3631 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653139326233633430633030313737306639663862636430346265 Dec 12 17:27:27.888384 containerd[1911]: time="2025-12-12T17:27:27.888256429Z" level=info msg="StartContainer for \"8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f\" returns successfully" Dec 12 17:27:28.461701 kubelet[3528]: I1212 17:27:28.460715 3528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-59kpj" podStartSLOduration=5.460690812 podStartE2EDuration="5.460690812s" podCreationTimestamp="2025-12-12 17:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:25.448345041 +0000 UTC m=+6.412626524" watchObservedRunningTime="2025-12-12 17:27:28.460690812 +0000 UTC m=+9.424972319" Dec 12 17:27:28.461701 kubelet[3528]: I1212 17:27:28.460874 3528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6fjl5" podStartSLOduration=2.252849056 podStartE2EDuration="5.460864248s" podCreationTimestamp="2025-12-12 17:27:23 +0000 UTC" firstStartedPulling="2025-12-12 17:27:24.496269021 +0000 UTC m=+5.460550456" lastFinishedPulling="2025-12-12 17:27:27.704284213 +0000 UTC m=+8.668565648" observedRunningTime="2025-12-12 17:27:28.459915996 +0000 UTC m=+9.424197431" watchObservedRunningTime="2025-12-12 17:27:28.460864248 +0000 UTC m=+9.425145683" Dec 12 17:27:35.524294 sudo[2275]: pam_unix(sudo:session): session closed for user root Dec 12 17:27:35.532689 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 17:27:35.532816 kernel: audit: type=1106 audit(1765560455.524:534): pid=2275 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:27:35.524000 audit[2275]: USER_END pid=2275 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:35.524000 audit[2275]: CRED_DISP pid=2275 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:27:35.541513 kernel: audit: type=1104 audit(1765560455.524:535): pid=2275 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:27:35.555707 sshd[2274]: Connection closed by 139.178.68.195 port 52728 Dec 12 17:27:35.555231 sshd-session[2271]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:35.558000 audit[2271]: USER_END pid=2271 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:27:35.562542 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:27:35.563126 systemd[1]: session-7.scope: Consumed 11.143s CPU time, 224.4M memory peak. Dec 12 17:27:35.569478 systemd[1]: sshd@6-172.31.17.228:22-139.178.68.195:52728.service: Deactivated successfully. Dec 12 17:27:35.558000 audit[2271]: CRED_DISP pid=2271 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:27:35.576902 kernel: audit: type=1106 audit(1765560455.558:536): pid=2271 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:27:35.577040 kernel: audit: type=1104 audit(1765560455.558:537): pid=2271 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:27:35.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.17.228:22-139.178.68.195:52728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:35.586719 kernel: audit: type=1131 audit(1765560455.571:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.17.228:22-139.178.68.195:52728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:35.586898 systemd-logind[1883]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:27:35.593497 systemd-logind[1883]: Removed session 7. 
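The pod_startup_latency_tracker entries a little above report both podStartE2EDuration and podStartSLOduration for the tigera-operator pod. The logged values are consistent with the SLO figure being the end-to-end duration minus the image-pull window (lastFinishedPulling - firstStartedPulling); a quick check in Python against the timestamps copied from that entry:

from datetime import datetime

# Timestamps copied (truncated to microseconds) from the
# tigera-operator pod_startup_latency_tracker entry above.
fmt = "%Y-%m-%d %H:%M:%S.%f"
first_pull = datetime.strptime("2025-12-12 17:27:24.496269", fmt)
last_pull = datetime.strptime("2025-12-12 17:27:27.704284", fmt)
e2e = 5.460864248  # podStartE2EDuration in seconds
pull = (last_pull - first_pull).total_seconds()
print(round(e2e - pull, 6))  # ~2.252849, matching podStartSLOduration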
Dec 12 17:27:41.486000 audit[3925]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:41.486000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff5ad3f20 a2=0 a3=1 items=0 ppid=3675 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:41.498593 kernel: audit: type=1325 audit(1765560461.486:539): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:41.498766 kernel: audit: type=1300 audit(1765560461.486:539): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff5ad3f20 a2=0 a3=1 items=0 ppid=3675 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:41.502138 kernel: audit: type=1327 audit(1765560461.486:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:41.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:41.510283 kernel: audit: type=1325 audit(1765560461.505:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:41.505000 audit[3925]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:41.505000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5ad3f20 a2=0 a3=1 items=0 ppid=3675 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:41.518786 kernel: audit: type=1300 audit(1765560461.505:540): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5ad3f20 a2=0 a3=1 items=0 ppid=3675 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:41.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:41.525812 kernel: audit: type=1327 audit(1765560461.505:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:41.536000 audit[3927]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:41.536000 audit[3927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe908f190 a2=0 a3=1 items=0 ppid=3675 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:41.541902 kernel: audit: type=1325 audit(1765560461.536:541): table=filter:107 family=2 entries=16 
op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:41.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:41.549246 kernel: audit: type=1300 audit(1765560461.536:541): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe908f190 a2=0 a3=1 items=0 ppid=3675 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:41.549000 audit[3927]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:41.554969 kernel: audit: type=1327 audit(1765560461.536:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:41.549000 audit[3927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe908f190 a2=0 a3=1 items=0 ppid=3675 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:41.559858 kernel: audit: type=1325 audit(1765560461.549:542): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:41.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:47.972000 audit[3929]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:47.979027 kernel: kauditd_printk_skb: 2 callbacks suppressed Dec 12 17:27:47.979193 kernel: audit: type=1325 audit(1765560467.972:543): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:47.972000 audit[3929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffef01d200 a2=0 a3=1 items=0 ppid=3675 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:47.987706 kernel: audit: type=1300 audit(1765560467.972:543): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffef01d200 a2=0 a3=1 items=0 ppid=3675 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:47.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:47.994344 kernel: audit: type=1327 audit(1765560467.972:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:47.980000 audit[3929]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:48.001706 kernel: audit: type=1325 audit(1765560467.980:544): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3929 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:47.980000 audit[3929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffef01d200 a2=0 a3=1 items=0 ppid=3675 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:47.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:48.012847 kernel: audit: type=1300 audit(1765560467.980:544): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffef01d200 a2=0 a3=1 items=0 ppid=3675 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:48.016692 kernel: audit: type=1327 audit(1765560467.980:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:48.103000 audit[3931]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:48.103000 audit[3931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc7451970 a2=0 a3=1 items=0 ppid=3675 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:48.115831 kernel: audit: type=1325 audit(1765560468.103:545): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:48.115891 kernel: audit: type=1300 audit(1765560468.103:545): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc7451970 a2=0 a3=1 items=0 ppid=3675 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:48.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:48.120771 kernel: audit: type=1327 audit(1765560468.103:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:48.115000 audit[3931]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:48.124807 kernel: audit: type=1325 audit(1765560468.115:546): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:48.115000 audit[3931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc7451970 a2=0 a3=1 items=0 ppid=3675 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:48.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:52.510000 audit[3934]: NETFILTER_CFG table=filter:113 family=2 entries=21 
op=nft_register_rule pid=3934 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:52.510000 audit[3934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc5375cc0 a2=0 a3=1 items=0 ppid=3675 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:52.516000 audit[3934]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3934 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:52.516000 audit[3934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc5375cc0 a2=0 a3=1 items=0 ppid=3675 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:52.551577 systemd[1]: Created slice kubepods-besteffort-pod04406693_b0c3_4232_8b24_7ad635137e1c.slice - libcontainer container kubepods-besteffort-pod04406693_b0c3_4232_8b24_7ad635137e1c.slice. Dec 12 17:27:52.558000 audit[3936]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:52.558000 audit[3936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe8852b60 a2=0 a3=1 items=0 ppid=3675 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:52.605000 audit[3936]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:52.614497 kubelet[3528]: I1212 17:27:52.614439 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04406693-b0c3-4232-8b24-7ad635137e1c-tigera-ca-bundle\") pod \"calico-typha-6cfb77479d-fq2xm\" (UID: \"04406693-b0c3-4232-8b24-7ad635137e1c\") " pod="calico-system/calico-typha-6cfb77479d-fq2xm" Dec 12 17:27:52.615990 kubelet[3528]: I1212 17:27:52.614519 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/04406693-b0c3-4232-8b24-7ad635137e1c-typha-certs\") pod \"calico-typha-6cfb77479d-fq2xm\" (UID: \"04406693-b0c3-4232-8b24-7ad635137e1c\") " pod="calico-system/calico-typha-6cfb77479d-fq2xm" Dec 12 17:27:52.615990 kubelet[3528]: I1212 17:27:52.614577 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98c8f\" (UniqueName: \"kubernetes.io/projected/04406693-b0c3-4232-8b24-7ad635137e1c-kube-api-access-98c8f\") pod \"calico-typha-6cfb77479d-fq2xm\" (UID: \"04406693-b0c3-4232-8b24-7ad635137e1c\") " 
pod="calico-system/calico-typha-6cfb77479d-fq2xm" Dec 12 17:27:52.605000 audit[3936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe8852b60 a2=0 a3=1 items=0 ppid=3675 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:52.818838 systemd[1]: Created slice kubepods-besteffort-pod5281b2cf_9c68_4092_b2e0_5810ce63f8ad.slice - libcontainer container kubepods-besteffort-pod5281b2cf_9c68_4092_b2e0_5810ce63f8ad.slice. Dec 12 17:27:52.863489 containerd[1911]: time="2025-12-12T17:27:52.863323802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cfb77479d-fq2xm,Uid:04406693-b0c3-4232-8b24-7ad635137e1c,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:52.918010 kubelet[3528]: I1212 17:27:52.916875 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-lib-modules\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.918010 kubelet[3528]: I1212 17:27:52.917716 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-cni-net-dir\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.918010 kubelet[3528]: I1212 17:27:52.917851 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phx4h\" (UniqueName: \"kubernetes.io/projected/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-kube-api-access-phx4h\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.919530 kubelet[3528]: I1212 17:27:52.919219 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-flexvol-driver-host\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.919530 kubelet[3528]: I1212 17:27:52.919469 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-node-certs\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.921080 kubelet[3528]: I1212 17:27:52.920504 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-cni-bin-dir\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.921080 kubelet[3528]: I1212 17:27:52.920704 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-tigera-ca-bundle\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.921080 kubelet[3528]: I1212 17:27:52.920914 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-var-run-calico\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.921080 kubelet[3528]: I1212 17:27:52.921012 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-policysync\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.924273 kubelet[3528]: I1212 17:27:52.924009 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-cni-log-dir\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.924273 kubelet[3528]: I1212 17:27:52.924096 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-xtables-lock\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.924273 kubelet[3528]: I1212 17:27:52.924150 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5281b2cf-9c68-4092-b2e0-5810ce63f8ad-var-lib-calico\") pod \"calico-node-8pvkh\" (UID: \"5281b2cf-9c68-4092-b2e0-5810ce63f8ad\") " pod="calico-system/calico-node-8pvkh" Dec 12 17:27:52.930371 containerd[1911]: time="2025-12-12T17:27:52.930302678Z" level=info msg="connecting to shim d9e28d5d778ada73a69eb865b706d09ffe4fcb473f73cd3e495c24a706e62fa2" address="unix:///run/containerd/s/f9c33d975ecfbf4d9a0df1d8e0341a49f1651fe9c83a8aaf9f987fd2d52926b2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:53.017114 systemd[1]: Started cri-containerd-d9e28d5d778ada73a69eb865b706d09ffe4fcb473f73cd3e495c24a706e62fa2.scope - libcontainer container d9e28d5d778ada73a69eb865b706d09ffe4fcb473f73cd3e495c24a706e62fa2. 
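The "Created slice kubepods-besteffort-pod..." entries above show the systemd slice naming the kubelet uses for best-effort pods: the pod UID with dashes replaced by underscores. A small sketch reproducing the logged name (assuming the systemd cgroup driver and BestEffort QoS class, as these entries indicate):

# Reproduce the slice name from the "Created slice" entries above.
# UID taken from the calico-typha pod logged above.
pod_uid = "04406693-b0c3-4232-8b24-7ad635137e1c"
slice_name = f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"
print(slice_name)
# -> kubepods-besteffort-pod04406693_b0c3_4232_8b24_7ad635137e1c.slice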
Dec 12 17:27:53.034704 kubelet[3528]: E1212 17:27:53.033991 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:27:53.037587 kubelet[3528]: E1212 17:27:53.035791 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.037587 kubelet[3528]: W1212 17:27:53.035832 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.037587 kubelet[3528]: E1212 17:27:53.035876 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.037587 kubelet[3528]: E1212 17:27:53.037221 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.037587 kubelet[3528]: W1212 17:27:53.037277 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.037587 kubelet[3528]: E1212 17:27:53.037311 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.039904 kubelet[3528]: E1212 17:27:53.039863 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.040109 kubelet[3528]: W1212 17:27:53.040080 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.040231 kubelet[3528]: E1212 17:27:53.040201 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.044227 kubelet[3528]: E1212 17:27:53.041957 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.044485 kubelet[3528]: W1212 17:27:53.044445 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.044621 kubelet[3528]: E1212 17:27:53.044596 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.045844 kubelet[3528]: E1212 17:27:53.045804 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.048822 kubelet[3528]: W1212 17:27:53.048743 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.049107 kubelet[3528]: E1212 17:27:53.049078 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.058922 kubelet[3528]: E1212 17:27:53.058263 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.058922 kubelet[3528]: W1212 17:27:53.058307 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.058922 kubelet[3528]: E1212 17:27:53.058343 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.065265 kubelet[3528]: E1212 17:27:53.065133 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.066806 kubelet[3528]: W1212 17:27:53.066056 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.067255 kubelet[3528]: E1212 17:27:53.066745 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.071193 kubelet[3528]: E1212 17:27:53.070840 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.071193 kubelet[3528]: W1212 17:27:53.071071 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.071603 kubelet[3528]: E1212 17:27:53.071112 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.074247 kubelet[3528]: E1212 17:27:53.074212 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.076203 kubelet[3528]: W1212 17:27:53.074717 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.076203 kubelet[3528]: E1212 17:27:53.074768 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.078836 kubelet[3528]: E1212 17:27:53.078571 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.079070 kubelet[3528]: W1212 17:27:53.079034 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.079200 kubelet[3528]: E1212 17:27:53.079175 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.081088 kubelet[3528]: E1212 17:27:53.080783 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.081088 kubelet[3528]: W1212 17:27:53.080845 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.081088 kubelet[3528]: E1212 17:27:53.080881 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.081448 kubelet[3528]: E1212 17:27:53.081422 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.081582 kubelet[3528]: W1212 17:27:53.081555 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.081726 kubelet[3528]: E1212 17:27:53.081701 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.083423 kubelet[3528]: E1212 17:27:53.082859 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.083423 kubelet[3528]: W1212 17:27:53.082934 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.083423 kubelet[3528]: E1212 17:27:53.082972 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.084855 kubelet[3528]: E1212 17:27:53.084675 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.084855 kubelet[3528]: W1212 17:27:53.084711 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.084855 kubelet[3528]: E1212 17:27:53.084743 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.086893 kubelet[3528]: E1212 17:27:53.086276 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.086893 kubelet[3528]: W1212 17:27:53.086722 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.086893 kubelet[3528]: E1212 17:27:53.086761 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.087969 kubelet[3528]: E1212 17:27:53.087931 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.089590 kubelet[3528]: W1212 17:27:53.088395 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.089990 kubelet[3528]: E1212 17:27:53.089946 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.091127 kubelet[3528]: E1212 17:27:53.091088 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.091677 kubelet[3528]: W1212 17:27:53.091335 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.091677 kubelet[3528]: E1212 17:27:53.091380 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.092016 kubelet[3528]: E1212 17:27:53.091989 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.092181 kubelet[3528]: W1212 17:27:53.092153 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.092492 kubelet[3528]: E1212 17:27:53.092460 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.096024 kubelet[3528]: E1212 17:27:53.095800 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.096024 kubelet[3528]: W1212 17:27:53.095844 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.096024 kubelet[3528]: E1212 17:27:53.095881 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.096772 kubelet[3528]: E1212 17:27:53.096741 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.097238 kubelet[3528]: W1212 17:27:53.097069 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.097238 kubelet[3528]: E1212 17:27:53.097204 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.098187 kubelet[3528]: E1212 17:27:53.098004 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.098187 kubelet[3528]: W1212 17:27:53.098107 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.098187 kubelet[3528]: E1212 17:27:53.098153 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.099557 kubelet[3528]: E1212 17:27:53.099307 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.099557 kubelet[3528]: W1212 17:27:53.099339 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.099557 kubelet[3528]: E1212 17:27:53.099369 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.102456 kubelet[3528]: E1212 17:27:53.102363 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.102706 kubelet[3528]: W1212 17:27:53.102587 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.102706 kubelet[3528]: E1212 17:27:53.102632 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.104698 kubelet[3528]: E1212 17:27:53.103341 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.104917 kubelet[3528]: W1212 17:27:53.104713 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.104917 kubelet[3528]: E1212 17:27:53.104780 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.105455 kubelet[3528]: E1212 17:27:53.105409 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.105740 kubelet[3528]: W1212 17:27:53.105470 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.105740 kubelet[3528]: E1212 17:27:53.105503 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.106842 kubelet[3528]: E1212 17:27:53.106783 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.106842 kubelet[3528]: W1212 17:27:53.106825 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.107281 kubelet[3528]: E1212 17:27:53.106859 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.108307 kubelet[3528]: E1212 17:27:53.108242 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.108307 kubelet[3528]: W1212 17:27:53.108284 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.109339 kubelet[3528]: E1212 17:27:53.108318 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.110934 kubelet[3528]: E1212 17:27:53.110882 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.110934 kubelet[3528]: W1212 17:27:53.110924 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.111229 kubelet[3528]: E1212 17:27:53.110960 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.111538 kubelet[3528]: E1212 17:27:53.111501 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.111538 kubelet[3528]: W1212 17:27:53.111532 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.111910 kubelet[3528]: E1212 17:27:53.111560 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.112202 kubelet[3528]: E1212 17:27:53.112161 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.112436 kubelet[3528]: W1212 17:27:53.112197 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.112436 kubelet[3528]: E1212 17:27:53.112272 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.112926 kubelet[3528]: E1212 17:27:53.112877 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.112926 kubelet[3528]: W1212 17:27:53.112912 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.115116 kubelet[3528]: E1212 17:27:53.112945 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.115116 kubelet[3528]: E1212 17:27:53.113341 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.115116 kubelet[3528]: W1212 17:27:53.113362 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.115116 kubelet[3528]: E1212 17:27:53.113404 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.115812 kubelet[3528]: E1212 17:27:53.115764 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.115812 kubelet[3528]: W1212 17:27:53.115803 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.116020 kubelet[3528]: E1212 17:27:53.115848 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.116281 kubelet[3528]: E1212 17:27:53.116220 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.116281 kubelet[3528]: W1212 17:27:53.116250 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.116281 kubelet[3528]: E1212 17:27:53.116276 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.117106 kubelet[3528]: E1212 17:27:53.117037 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.117106 kubelet[3528]: W1212 17:27:53.117080 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.117823 kubelet[3528]: E1212 17:27:53.117115 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.119158 kubelet[3528]: E1212 17:27:53.119099 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.119158 kubelet[3528]: W1212 17:27:53.119143 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.119382 kubelet[3528]: E1212 17:27:53.119182 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.119675 kubelet[3528]: E1212 17:27:53.119608 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.119675 kubelet[3528]: W1212 17:27:53.119641 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.119790 kubelet[3528]: E1212 17:27:53.119710 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.120087 kubelet[3528]: E1212 17:27:53.120034 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.120087 kubelet[3528]: W1212 17:27:53.120065 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.120087 kubelet[3528]: E1212 17:27:53.120089 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.120469 kubelet[3528]: E1212 17:27:53.120428 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.120469 kubelet[3528]: W1212 17:27:53.120458 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.120599 kubelet[3528]: E1212 17:27:53.120482 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.120860 kubelet[3528]: E1212 17:27:53.120806 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.120860 kubelet[3528]: W1212 17:27:53.120834 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.120860 kubelet[3528]: E1212 17:27:53.120857 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.121248 kubelet[3528]: E1212 17:27:53.121147 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.121248 kubelet[3528]: W1212 17:27:53.121175 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.121248 kubelet[3528]: E1212 17:27:53.121198 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.121531 kubelet[3528]: E1212 17:27:53.121491 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.121531 kubelet[3528]: W1212 17:27:53.121518 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.121769 kubelet[3528]: E1212 17:27:53.121544 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.122025 kubelet[3528]: E1212 17:27:53.121883 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.122025 kubelet[3528]: W1212 17:27:53.121911 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.122025 kubelet[3528]: E1212 17:27:53.121933 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.122286 kubelet[3528]: E1212 17:27:53.122220 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.122286 kubelet[3528]: W1212 17:27:53.122245 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.122286 kubelet[3528]: E1212 17:27:53.122272 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.122685 kubelet[3528]: E1212 17:27:53.122554 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.122685 kubelet[3528]: W1212 17:27:53.122581 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.122685 kubelet[3528]: E1212 17:27:53.122603 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.123062 kubelet[3528]: E1212 17:27:53.122924 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.123062 kubelet[3528]: W1212 17:27:53.122953 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.123062 kubelet[3528]: E1212 17:27:53.122977 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.123331 kubelet[3528]: E1212 17:27:53.123288 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.123331 kubelet[3528]: W1212 17:27:53.123315 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.123448 kubelet[3528]: E1212 17:27:53.123337 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.123679 kubelet[3528]: E1212 17:27:53.123617 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.123679 kubelet[3528]: W1212 17:27:53.123643 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.123810 kubelet[3528]: E1212 17:27:53.123704 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.124058 kubelet[3528]: E1212 17:27:53.123987 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.124058 kubelet[3528]: W1212 17:27:53.124015 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.124058 kubelet[3528]: E1212 17:27:53.124037 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.124428 kubelet[3528]: E1212 17:27:53.124338 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.124428 kubelet[3528]: W1212 17:27:53.124370 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.124428 kubelet[3528]: E1212 17:27:53.124394 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.126521 kubelet[3528]: E1212 17:27:53.125996 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.126521 kubelet[3528]: W1212 17:27:53.126492 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.127300 kubelet[3528]: E1212 17:27:53.126553 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.127300 kubelet[3528]: I1212 17:27:53.126634 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ad89cf6-178c-4c89-9906-56d3d4e0dba0-kubelet-dir\") pod \"csi-node-driver-5phxm\" (UID: \"5ad89cf6-178c-4c89-9906-56d3d4e0dba0\") " pod="calico-system/csi-node-driver-5phxm" Dec 12 17:27:53.127859 kubelet[3528]: E1212 17:27:53.127803 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.127956 kubelet[3528]: W1212 17:27:53.127865 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.127956 kubelet[3528]: E1212 17:27:53.127898 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.128073 kubelet[3528]: I1212 17:27:53.127976 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5ad89cf6-178c-4c89-9906-56d3d4e0dba0-registration-dir\") pod \"csi-node-driver-5phxm\" (UID: \"5ad89cf6-178c-4c89-9906-56d3d4e0dba0\") " pod="calico-system/csi-node-driver-5phxm" Dec 12 17:27:53.129226 kubelet[3528]: E1212 17:27:53.129180 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.129226 kubelet[3528]: W1212 17:27:53.129218 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.129382 kubelet[3528]: E1212 17:27:53.129275 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.129470 kubelet[3528]: I1212 17:27:53.129377 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5ad89cf6-178c-4c89-9906-56d3d4e0dba0-socket-dir\") pod \"csi-node-driver-5phxm\" (UID: \"5ad89cf6-178c-4c89-9906-56d3d4e0dba0\") " pod="calico-system/csi-node-driver-5phxm" Dec 12 17:27:53.130308 kubelet[3528]: E1212 17:27:53.130261 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.130548 kubelet[3528]: W1212 17:27:53.130319 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.130548 kubelet[3528]: E1212 17:27:53.130355 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.131779 kubelet[3528]: E1212 17:27:53.131728 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.131936 kubelet[3528]: W1212 17:27:53.131789 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.131936 kubelet[3528]: E1212 17:27:53.131826 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.133267 kubelet[3528]: E1212 17:27:53.133203 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.133267 kubelet[3528]: W1212 17:27:53.133256 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.133900 kubelet[3528]: E1212 17:27:53.133306 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.133900 kubelet[3528]: I1212 17:27:53.133562 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5ad89cf6-178c-4c89-9906-56d3d4e0dba0-varrun\") pod \"csi-node-driver-5phxm\" (UID: \"5ad89cf6-178c-4c89-9906-56d3d4e0dba0\") " pod="calico-system/csi-node-driver-5phxm" Dec 12 17:27:53.134055 kubelet[3528]: E1212 17:27:53.134015 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.134055 kubelet[3528]: W1212 17:27:53.134037 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.134434 kubelet[3528]: E1212 17:27:53.134092 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.134949 kubelet[3528]: E1212 17:27:53.134839 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.134949 kubelet[3528]: W1212 17:27:53.134878 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.135545 kubelet[3528]: E1212 17:27:53.135464 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.136092 kubelet[3528]: E1212 17:27:53.136049 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.136092 kubelet[3528]: W1212 17:27:53.136086 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.136364 kubelet[3528]: E1212 17:27:53.136117 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.136642 kubelet[3528]: E1212 17:27:53.136438 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.136642 kubelet[3528]: W1212 17:27:53.136468 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.136642 kubelet[3528]: E1212 17:27:53.136491 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.136642 kubelet[3528]: E1212 17:27:53.136889 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.136642 kubelet[3528]: W1212 17:27:53.136907 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.136642 kubelet[3528]: E1212 17:27:53.136927 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.136642 kubelet[3528]: E1212 17:27:53.137238 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.136642 kubelet[3528]: W1212 17:27:53.137255 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.136642 kubelet[3528]: E1212 17:27:53.137276 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.136642 kubelet[3528]: E1212 17:27:53.137620 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.138298 kubelet[3528]: W1212 17:27:53.137635 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.138298 kubelet[3528]: E1212 17:27:53.137653 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.138298 kubelet[3528]: I1212 17:27:53.137744 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzhvk\" (UniqueName: \"kubernetes.io/projected/5ad89cf6-178c-4c89-9906-56d3d4e0dba0-kube-api-access-dzhvk\") pod \"csi-node-driver-5phxm\" (UID: \"5ad89cf6-178c-4c89-9906-56d3d4e0dba0\") " pod="calico-system/csi-node-driver-5phxm" Dec 12 17:27:53.138298 kubelet[3528]: E1212 17:27:53.138137 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.138298 kubelet[3528]: W1212 17:27:53.138158 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.138298 kubelet[3528]: E1212 17:27:53.138179 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.138581 kubelet[3528]: E1212 17:27:53.138446 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.138581 kubelet[3528]: W1212 17:27:53.138467 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.138581 kubelet[3528]: E1212 17:27:53.138486 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.157879 kubelet[3528]: E1212 17:27:53.156935 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.157879 kubelet[3528]: W1212 17:27:53.157757 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.157879 kubelet[3528]: E1212 17:27:53.157805 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.221113 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 12 17:27:53.221268 kernel: audit: type=1334 audit(1765560473.216:551): prog-id=157 op=LOAD Dec 12 17:27:53.216000 audit: BPF prog-id=157 op=LOAD Dec 12 17:27:53.221000 audit: BPF prog-id=158 op=LOAD Dec 12 17:27:53.221000 audit[3958]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.233707 kernel: audit: type=1334 audit(1765560473.221:552): prog-id=158 op=LOAD Dec 12 17:27:53.233833 kernel: audit: type=1300 audit(1765560473.221:552): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 17:27:53.241694 kernel: audit: type=1327 audit(1765560473.221:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 17:27:53.241815 kubelet[3528]: E1212 17:27:53.238731 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.241815 kubelet[3528]: W1212 17:27:53.238798 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.241815 kubelet[3528]: E1212 17:27:53.238831 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.243161 kubelet[3528]: E1212 17:27:53.242395 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.243161 kubelet[3528]: W1212 17:27:53.242435 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.243161 kubelet[3528]: E1212 17:27:53.242470 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.243161 kubelet[3528]: E1212 17:27:53.242966 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.243161 kubelet[3528]: W1212 17:27:53.243014 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.243161 kubelet[3528]: E1212 17:27:53.243043 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.222000 audit: BPF prog-id=158 op=UNLOAD Dec 12 17:27:53.246092 kernel: audit: type=1334 audit(1765560473.222:553): prog-id=158 op=UNLOAD Dec 12 17:27:53.246226 kubelet[3528]: E1212 17:27:53.245097 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.246226 kubelet[3528]: W1212 17:27:53.245128 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.246226 kubelet[3528]: E1212 17:27:53.245160 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.246226 kubelet[3528]: E1212 17:27:53.245573 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.246226 kubelet[3528]: W1212 17:27:53.245590 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.246226 kubelet[3528]: E1212 17:27:53.245610 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.246226 kubelet[3528]: E1212 17:27:53.245954 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.246226 kubelet[3528]: W1212 17:27:53.245979 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.246226 kubelet[3528]: E1212 17:27:53.246001 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.246730 kubelet[3528]: E1212 17:27:53.246333 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.246730 kubelet[3528]: W1212 17:27:53.246354 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.246730 kubelet[3528]: E1212 17:27:53.246377 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.247195 kubelet[3528]: E1212 17:27:53.247131 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.247195 kubelet[3528]: W1212 17:27:53.247166 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.247195 kubelet[3528]: E1212 17:27:53.247196 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.254838 kernel: audit: type=1300 audit(1765560473.222:553): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.222000 audit[3958]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.255100 kubelet[3528]: E1212 17:27:53.247734 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.255100 kubelet[3528]: W1212 17:27:53.247754 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.255100 kubelet[3528]: E1212 17:27:53.247779 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.255100 kubelet[3528]: E1212 17:27:53.248809 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.255100 kubelet[3528]: W1212 17:27:53.248834 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.255100 kubelet[3528]: E1212 17:27:53.248861 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.255100 kubelet[3528]: E1212 17:27:53.249137 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.255100 kubelet[3528]: W1212 17:27:53.249158 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.255100 kubelet[3528]: E1212 17:27:53.249178 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.255100 kubelet[3528]: E1212 17:27:53.250911 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.255713 kubelet[3528]: W1212 17:27:53.250932 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.255713 kubelet[3528]: E1212 17:27:53.250957 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.255713 kubelet[3528]: E1212 17:27:53.251221 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.255713 kubelet[3528]: W1212 17:27:53.251236 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.255713 kubelet[3528]: E1212 17:27:53.251253 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.255713 kubelet[3528]: E1212 17:27:53.251477 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.255713 kubelet[3528]: W1212 17:27:53.251491 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.255713 kubelet[3528]: E1212 17:27:53.251514 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.255713 kubelet[3528]: E1212 17:27:53.251790 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.255713 kubelet[3528]: W1212 17:27:53.251805 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.256440 kubelet[3528]: E1212 17:27:53.251824 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.256440 kubelet[3528]: E1212 17:27:53.252315 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.256440 kubelet[3528]: W1212 17:27:53.252332 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.256440 kubelet[3528]: E1212 17:27:53.252352 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.256440 kubelet[3528]: E1212 17:27:53.252599 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.256440 kubelet[3528]: W1212 17:27:53.252614 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.256440 kubelet[3528]: E1212 17:27:53.252631 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.256440 kubelet[3528]: E1212 17:27:53.252908 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.256440 kubelet[3528]: W1212 17:27:53.252923 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.256440 kubelet[3528]: E1212 17:27:53.252940 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.257053 kubelet[3528]: E1212 17:27:53.255709 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.257053 kubelet[3528]: W1212 17:27:53.255738 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.257053 kubelet[3528]: E1212 17:27:53.255771 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.257053 kubelet[3528]: E1212 17:27:53.256281 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.257053 kubelet[3528]: W1212 17:27:53.256303 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.257053 kubelet[3528]: E1212 17:27:53.256326 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.257053 kubelet[3528]: E1212 17:27:53.256736 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.257053 kubelet[3528]: W1212 17:27:53.256756 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.257053 kubelet[3528]: E1212 17:27:53.256778 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.265027 kernel: audit: type=1327 audit(1765560473.222:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 17:27:53.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 17:27:53.265149 kubelet[3528]: E1212 17:27:53.257101 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.265149 kubelet[3528]: W1212 17:27:53.257122 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.265149 kubelet[3528]: E1212 17:27:53.257141 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.265149 kubelet[3528]: E1212 17:27:53.257519 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.265149 kubelet[3528]: W1212 17:27:53.257538 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.265149 kubelet[3528]: E1212 17:27:53.257574 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.265149 kubelet[3528]: E1212 17:27:53.257942 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.265149 kubelet[3528]: W1212 17:27:53.257964 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.265149 kubelet[3528]: E1212 17:27:53.257990 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.265149 kubelet[3528]: E1212 17:27:53.258716 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.265631 kubelet[3528]: W1212 17:27:53.258744 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.265631 kubelet[3528]: E1212 17:27:53.258774 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:53.222000 audit: BPF prog-id=159 op=LOAD Dec 12 17:27:53.269586 kernel: audit: type=1334 audit(1765560473.222:554): prog-id=159 op=LOAD Dec 12 17:27:53.222000 audit[3958]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.275761 kernel: audit: type=1300 audit(1765560473.222:554): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 17:27:53.281859 kernel: audit: type=1327 audit(1765560473.222:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 17:27:53.223000 audit: BPF prog-id=160 op=LOAD Dec 12 17:27:53.223000 audit[3958]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 17:27:53.224000 audit: BPF prog-id=160 op=UNLOAD Dec 12 17:27:53.224000 audit[3958]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 17:27:53.224000 audit: BPF prog-id=159 op=UNLOAD Dec 12 17:27:53.224000 audit[3958]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 
17:27:53.224000 audit: BPF prog-id=161 op=LOAD Dec 12 17:27:53.224000 audit[3958]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3948 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439653238643564373738616461373361363965623836356237303664 Dec 12 17:27:53.302124 kubelet[3528]: E1212 17:27:53.302061 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:53.302124 kubelet[3528]: W1212 17:27:53.302102 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:53.302342 kubelet[3528]: E1212 17:27:53.302137 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:53.430926 containerd[1911]: time="2025-12-12T17:27:53.430850928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8pvkh,Uid:5281b2cf-9c68-4092-b2e0-5810ce63f8ad,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:53.471797 containerd[1911]: time="2025-12-12T17:27:53.471694129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cfb77479d-fq2xm,Uid:04406693-b0c3-4232-8b24-7ad635137e1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9e28d5d778ada73a69eb865b706d09ffe4fcb473f73cd3e495c24a706e62fa2\"" Dec 12 17:27:53.477010 containerd[1911]: time="2025-12-12T17:27:53.476935093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:27:53.503559 containerd[1911]: time="2025-12-12T17:27:53.502062061Z" level=info msg="connecting to shim eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8" address="unix:///run/containerd/s/33283d0e0817781778cca616bef90793eadba47f6131a547cf6d2d7d8e3ef99c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:53.615818 systemd[1]: Started cri-containerd-eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8.scope - libcontainer container eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8. 
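The repeated driver-call.go errors above come from the kubelet probing the FlexVolume plugin directory and finding no executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, so the init call produces empty output and the JSON unmarshal fails. A minimal sketch of what would satisfy that probe, assuming the standard FlexVolume driver protocol (first argument is the operation, driver must print a JSON status object); this is an illustrative stub, not the actual uds driver:

    #!/usr/bin/env python3
    # Hypothetical FlexVolume driver stub. The kubelet invokes it as: <driver> init
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Empty stdout is what yields "unexpected end of JSON input";
            # a driver must always answer with a JSON status object.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Any operation this stub does not implement.
        print(json.dumps({"status": "Not supported"}))
        return 0

    if __name__ == "__main__":
        sys.exit(main())
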
Dec 12 17:27:53.828000 audit[4129]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4129 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:53.828000 audit[4129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff98339c0 a2=0 a3=1 items=0 ppid=3675 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:53.830000 audit: BPF prog-id=162 op=LOAD Dec 12 17:27:53.832000 audit: BPF prog-id=163 op=LOAD Dec 12 17:27:53.834000 audit[4129]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4129 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:53.832000 audit[4108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4096 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562366461353465653032303037636465626138383963353562663531 Dec 12 17:27:53.835000 audit: BPF prog-id=163 op=UNLOAD Dec 12 17:27:53.834000 audit[4129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff98339c0 a2=0 a3=1 items=0 ppid=3675 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.834000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:53.835000 audit[4108]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562366461353465653032303037636465626138383963353562663531 Dec 12 17:27:53.837000 audit: BPF prog-id=164 op=LOAD Dec 12 17:27:53.837000 audit[4108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4096 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562366461353465653032303037636465626138383963353562663531 Dec 12 17:27:53.838000 audit: BPF prog-id=165 op=LOAD Dec 12 17:27:53.838000 audit[4108]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4096 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562366461353465653032303037636465626138383963353562663531 Dec 12 17:27:53.838000 audit: BPF prog-id=165 op=UNLOAD Dec 12 17:27:53.838000 audit[4108]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562366461353465653032303037636465626138383963353562663531 Dec 12 17:27:53.840000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:27:53.840000 audit[4108]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562366461353465653032303037636465626138383963353562663531 Dec 12 17:27:53.840000 audit: BPF prog-id=166 op=LOAD Dec 12 17:27:53.840000 audit[4108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4096 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:53.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562366461353465653032303037636465626138383963353562663531 Dec 12 17:27:53.878635 containerd[1911]: time="2025-12-12T17:27:53.878532639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8pvkh,Uid:5281b2cf-9c68-4092-b2e0-5810ce63f8ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8\"" Dec 12 17:27:54.324145 kubelet[3528]: E1212 17:27:54.324069 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:27:54.672017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1683040043.mount: Deactivated successfully. 
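The audit PROCTITLE records above carry the audited command line hex-encoded, with NUL bytes separating the arguments. A small sketch for turning one of those values back into readable argv (paste the proctitle= hex from any record; the container IDs in these records are truncated, so decode whatever the record actually contains):

    #!/usr/bin/env python3
    # Decode an audit PROCTITLE value: hex-encoded bytes, argv joined by NULs.
    import sys

    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        # Arguments are separated by NUL bytes in the audit record.
        return [part.decode("utf-8", errors="replace")
                for part in raw.split(b"\x00") if part]

    if __name__ == "__main__":
        # Pass the proctitle= hex string as the first argument.
        print(" ".join(decode_proctitle(sys.argv[1])))

Decoding the records above yields the runc invocations behind the BPF audit events, e.g. runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container id>.
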
Dec 12 17:27:55.515213 containerd[1911]: time="2025-12-12T17:27:55.515128743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:55.518340 containerd[1911]: time="2025-12-12T17:27:55.518253627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31717602" Dec 12 17:27:55.520173 containerd[1911]: time="2025-12-12T17:27:55.520107219Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:55.528216 containerd[1911]: time="2025-12-12T17:27:55.528129063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:55.529528 containerd[1911]: time="2025-12-12T17:27:55.529318839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.052305542s" Dec 12 17:27:55.529528 containerd[1911]: time="2025-12-12T17:27:55.529372359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:27:55.532690 containerd[1911]: time="2025-12-12T17:27:55.532570443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:27:55.565713 containerd[1911]: time="2025-12-12T17:27:55.565589835Z" level=info msg="CreateContainer within sandbox \"d9e28d5d778ada73a69eb865b706d09ffe4fcb473f73cd3e495c24a706e62fa2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:27:55.583689 containerd[1911]: time="2025-12-12T17:27:55.582822159Z" level=info msg="Container 259a87d95ff090640bb21016eff7cfc64ce90eec76615450548a46c1d5963c15: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:55.590424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821145340.mount: Deactivated successfully. Dec 12 17:27:55.612495 containerd[1911]: time="2025-12-12T17:27:55.612327531Z" level=info msg="CreateContainer within sandbox \"d9e28d5d778ada73a69eb865b706d09ffe4fcb473f73cd3e495c24a706e62fa2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"259a87d95ff090640bb21016eff7cfc64ce90eec76615450548a46c1d5963c15\"" Dec 12 17:27:55.616897 containerd[1911]: time="2025-12-12T17:27:55.614652639Z" level=info msg="StartContainer for \"259a87d95ff090640bb21016eff7cfc64ce90eec76615450548a46c1d5963c15\"" Dec 12 17:27:55.618239 containerd[1911]: time="2025-12-12T17:27:55.618092079Z" level=info msg="connecting to shim 259a87d95ff090640bb21016eff7cfc64ce90eec76615450548a46c1d5963c15" address="unix:///run/containerd/s/f9c33d975ecfbf4d9a0df1d8e0341a49f1651fe9c83a8aaf9f987fd2d52926b2" protocol=ttrpc version=3 Dec 12 17:27:55.679045 systemd[1]: Started cri-containerd-259a87d95ff090640bb21016eff7cfc64ce90eec76615450548a46c1d5963c15.scope - libcontainer container 259a87d95ff090640bb21016eff7cfc64ce90eec76615450548a46c1d5963c15. 
Dec 12 17:27:55.711000 audit: BPF prog-id=167 op=LOAD Dec 12 17:27:55.712000 audit: BPF prog-id=168 op=LOAD Dec 12 17:27:55.712000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3948 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:55.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235396138376439356666303930363430626232313031366566663763 Dec 12 17:27:55.712000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:27:55.712000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:55.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235396138376439356666303930363430626232313031366566663763 Dec 12 17:27:55.713000 audit: BPF prog-id=169 op=LOAD Dec 12 17:27:55.713000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3948 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:55.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235396138376439356666303930363430626232313031366566663763 Dec 12 17:27:55.714000 audit: BPF prog-id=170 op=LOAD Dec 12 17:27:55.714000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3948 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:55.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235396138376439356666303930363430626232313031366566663763 Dec 12 17:27:55.714000 audit: BPF prog-id=170 op=UNLOAD Dec 12 17:27:55.714000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:55.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235396138376439356666303930363430626232313031366566663763 Dec 12 17:27:55.714000 audit: BPF prog-id=169 op=UNLOAD Dec 12 17:27:55.714000 audit[4150]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:55.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235396138376439356666303930363430626232313031366566663763 Dec 12 17:27:55.715000 audit: BPF prog-id=171 op=LOAD Dec 12 17:27:55.715000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3948 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:55.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235396138376439356666303930363430626232313031366566663763 Dec 12 17:27:55.786570 containerd[1911]: time="2025-12-12T17:27:55.785372920Z" level=info msg="StartContainer for \"259a87d95ff090640bb21016eff7cfc64ce90eec76615450548a46c1d5963c15\" returns successfully" Dec 12 17:27:56.323490 kubelet[3528]: E1212 17:27:56.322991 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:27:56.613300 kubelet[3528]: I1212 17:27:56.613074 3528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cfb77479d-fq2xm" podStartSLOduration=2.55684775 podStartE2EDuration="4.613054072s" podCreationTimestamp="2025-12-12 17:27:52 +0000 UTC" firstStartedPulling="2025-12-12 17:27:53.475071385 +0000 UTC m=+34.439352808" lastFinishedPulling="2025-12-12 17:27:55.531277707 +0000 UTC m=+36.495559130" observedRunningTime="2025-12-12 17:27:56.612604624 +0000 UTC m=+37.576886047" watchObservedRunningTime="2025-12-12 17:27:56.613054072 +0000 UTC m=+37.577335507" Dec 12 17:27:56.651722 kubelet[3528]: E1212 17:27:56.651571 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.652829 kubelet[3528]: W1212 17:27:56.652648 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.652829 kubelet[3528]: E1212 17:27:56.652746 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:56.657183 kubelet[3528]: E1212 17:27:56.657096 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.657770 kubelet[3528]: W1212 17:27:56.657632 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.658522 kubelet[3528]: E1212 17:27:56.658428 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.662719 kubelet[3528]: E1212 17:27:56.662471 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.662719 kubelet[3528]: W1212 17:27:56.662509 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.662719 kubelet[3528]: E1212 17:27:56.662543 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.664862 kubelet[3528]: E1212 17:27:56.664736 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.666047 kubelet[3528]: W1212 17:27:56.665213 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.666047 kubelet[3528]: E1212 17:27:56.665262 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.668991 kubelet[3528]: E1212 17:27:56.668792 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.668991 kubelet[3528]: W1212 17:27:56.668842 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.668991 kubelet[3528]: E1212 17:27:56.668877 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.670745 kubelet[3528]: E1212 17:27:56.670125 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.670745 kubelet[3528]: W1212 17:27:56.670161 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.670745 kubelet[3528]: E1212 17:27:56.670192 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:56.671892 kubelet[3528]: E1212 17:27:56.671847 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.672802 kubelet[3528]: W1212 17:27:56.672039 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.672802 kubelet[3528]: E1212 17:27:56.672081 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.674922 kubelet[3528]: E1212 17:27:56.674879 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.675308 kubelet[3528]: W1212 17:27:56.675132 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.675308 kubelet[3528]: E1212 17:27:56.675176 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.677883 kubelet[3528]: E1212 17:27:56.677842 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.678234 kubelet[3528]: W1212 17:27:56.678056 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.678234 kubelet[3528]: E1212 17:27:56.678100 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.679080 kubelet[3528]: E1212 17:27:56.678929 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.679080 kubelet[3528]: W1212 17:27:56.678961 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.679080 kubelet[3528]: E1212 17:27:56.678992 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.680719 kubelet[3528]: E1212 17:27:56.680377 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.680719 kubelet[3528]: W1212 17:27:56.680601 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.680719 kubelet[3528]: E1212 17:27:56.680642 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:56.681697 kubelet[3528]: E1212 17:27:56.681524 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.681697 kubelet[3528]: W1212 17:27:56.681555 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.681697 kubelet[3528]: E1212 17:27:56.681586 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.682472 kubelet[3528]: E1212 17:27:56.682239 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.682472 kubelet[3528]: W1212 17:27:56.682267 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.682472 kubelet[3528]: E1212 17:27:56.682292 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.684795 kubelet[3528]: E1212 17:27:56.684191 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.684795 kubelet[3528]: W1212 17:27:56.684229 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.684795 kubelet[3528]: E1212 17:27:56.684277 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.686079 kubelet[3528]: E1212 17:27:56.685846 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.686760 kubelet[3528]: W1212 17:27:56.686559 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.687276 kubelet[3528]: E1212 17:27:56.687142 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.689331 kubelet[3528]: E1212 17:27:56.689080 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.689998 kubelet[3528]: W1212 17:27:56.689614 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.689998 kubelet[3528]: E1212 17:27:56.689868 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:56.691820 kubelet[3528]: E1212 17:27:56.691366 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.691820 kubelet[3528]: W1212 17:27:56.691401 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.691820 kubelet[3528]: E1212 17:27:56.691434 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.694931 kubelet[3528]: E1212 17:27:56.694874 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.695340 kubelet[3528]: W1212 17:27:56.695021 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.695340 kubelet[3528]: E1212 17:27:56.695059 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.696858 kubelet[3528]: E1212 17:27:56.696639 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.697339 kubelet[3528]: W1212 17:27:56.696948 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.697339 kubelet[3528]: E1212 17:27:56.696986 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.698469 kubelet[3528]: E1212 17:27:56.698439 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.700059 kubelet[3528]: W1212 17:27:56.699359 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.700059 kubelet[3528]: E1212 17:27:56.699828 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.701481 kubelet[3528]: E1212 17:27:56.701445 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.702129 kubelet[3528]: W1212 17:27:56.701856 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.702129 kubelet[3528]: E1212 17:27:56.701928 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:56.703393 kubelet[3528]: E1212 17:27:56.702820 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.703393 kubelet[3528]: W1212 17:27:56.702850 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.703393 kubelet[3528]: E1212 17:27:56.702882 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.704314 kubelet[3528]: E1212 17:27:56.704274 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.704739 kubelet[3528]: W1212 17:27:56.704496 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.704739 kubelet[3528]: E1212 17:27:56.704541 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.705803 kubelet[3528]: E1212 17:27:56.705459 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.706499 kubelet[3528]: W1212 17:27:56.705954 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.706499 kubelet[3528]: E1212 17:27:56.705997 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.707607 kubelet[3528]: E1212 17:27:56.707478 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.708496 kubelet[3528]: W1212 17:27:56.707975 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.708496 kubelet[3528]: E1212 17:27:56.708125 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.709459 kubelet[3528]: E1212 17:27:56.709422 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.709702 kubelet[3528]: W1212 17:27:56.709592 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.709702 kubelet[3528]: E1212 17:27:56.709633 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:56.711361 kubelet[3528]: E1212 17:27:56.711287 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.711361 kubelet[3528]: W1212 17:27:56.711350 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.711648 kubelet[3528]: E1212 17:27:56.711389 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.712861 kubelet[3528]: E1212 17:27:56.712756 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.713283 kubelet[3528]: W1212 17:27:56.713130 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.714066 kubelet[3528]: E1212 17:27:56.713326 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.714409 kubelet[3528]: E1212 17:27:56.714365 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.714903 kubelet[3528]: W1212 17:27:56.714403 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.714903 kubelet[3528]: E1212 17:27:56.714741 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.717712 kubelet[3528]: E1212 17:27:56.716450 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.717712 kubelet[3528]: W1212 17:27:56.716486 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.717712 kubelet[3528]: E1212 17:27:56.716519 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:56.718352 kubelet[3528]: E1212 17:27:56.718247 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.719000 audit[4211]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:56.719000 audit[4211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd7c3bb00 a2=0 a3=1 items=0 ppid=3675 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:56.721153 kubelet[3528]: W1212 17:27:56.718737 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.721153 kubelet[3528]: E1212 17:27:56.718783 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.721740 kubelet[3528]: E1212 17:27:56.721707 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.722009 kubelet[3528]: W1212 17:27:56.721980 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.722165 kubelet[3528]: E1212 17:27:56.722137 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:56.722783 kubelet[3528]: E1212 17:27:56.722749 3528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:56.722942 kubelet[3528]: W1212 17:27:56.722915 3528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:56.723172 kubelet[3528]: E1212 17:27:56.723036 3528 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:56.724000 audit[4211]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:56.724000 audit[4211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd7c3bb00 a2=0 a3=1 items=0 ppid=3675 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:56.849311 containerd[1911]: time="2025-12-12T17:27:56.849229769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:56.852264 containerd[1911]: time="2025-12-12T17:27:56.852173105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:56.854855 containerd[1911]: time="2025-12-12T17:27:56.854773241Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:56.861356 containerd[1911]: time="2025-12-12T17:27:56.861073589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:56.863248 containerd[1911]: time="2025-12-12T17:27:56.862502561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.329332982s" Dec 12 17:27:56.865560 containerd[1911]: time="2025-12-12T17:27:56.863418149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:27:56.877812 containerd[1911]: time="2025-12-12T17:27:56.877760213Z" level=info msg="CreateContainer within sandbox \"eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:27:56.897033 containerd[1911]: time="2025-12-12T17:27:56.896963106Z" level=info msg="Container d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:56.907447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428310998.mount: Deactivated successfully. 
Dec 12 17:27:56.922700 containerd[1911]: time="2025-12-12T17:27:56.922536510Z" level=info msg="CreateContainer within sandbox \"eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85\"" Dec 12 17:27:56.925553 containerd[1911]: time="2025-12-12T17:27:56.923629986Z" level=info msg="StartContainer for \"d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85\"" Dec 12 17:27:56.929762 containerd[1911]: time="2025-12-12T17:27:56.929698518Z" level=info msg="connecting to shim d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85" address="unix:///run/containerd/s/33283d0e0817781778cca616bef90793eadba47f6131a547cf6d2d7d8e3ef99c" protocol=ttrpc version=3 Dec 12 17:27:56.974052 systemd[1]: Started cri-containerd-d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85.scope - libcontainer container d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85. Dec 12 17:27:57.060000 audit: BPF prog-id=172 op=LOAD Dec 12 17:27:57.060000 audit[4230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4096 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430333565313533393066333532333965653733303766653535393666 Dec 12 17:27:57.060000 audit: BPF prog-id=173 op=LOAD Dec 12 17:27:57.060000 audit[4230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4096 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430333565313533393066333532333965653733303766653535393666 Dec 12 17:27:57.060000 audit: BPF prog-id=173 op=UNLOAD Dec 12 17:27:57.060000 audit[4230]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430333565313533393066333532333965653733303766653535393666 Dec 12 17:27:57.060000 audit: BPF prog-id=172 op=UNLOAD Dec 12 17:27:57.060000 audit[4230]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.060000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430333565313533393066333532333965653733303766653535393666 Dec 12 17:27:57.060000 audit: BPF prog-id=174 op=LOAD Dec 12 17:27:57.060000 audit[4230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4096 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430333565313533393066333532333965653733303766653535393666 Dec 12 17:27:57.107703 containerd[1911]: time="2025-12-12T17:27:57.107515635Z" level=info msg="StartContainer for \"d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85\" returns successfully" Dec 12 17:27:57.142295 systemd[1]: cri-containerd-d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85.scope: Deactivated successfully. Dec 12 17:27:57.146000 audit: BPF prog-id=174 op=UNLOAD Dec 12 17:27:57.148912 containerd[1911]: time="2025-12-12T17:27:57.148840875Z" level=info msg="received container exit event container_id:\"d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85\" id:\"d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85\" pid:4243 exited_at:{seconds:1765560477 nanos:148162167}" Dec 12 17:27:57.199113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d035e15390f35239ee7307fe5596f002e26867055eec3f4f874ca7ccb61cab85-rootfs.mount: Deactivated successfully. 
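The flexvol-driver container that just exited ran the ghcr.io/flatcar/calico/pod2daemon-flexvol image, which in a typical Calico install populates the kubelet's FlexVolume plugin directory; whether the probe errors above stop recurring can be checked by listing that directory. A diagnostic sketch (path taken from the kubelet messages above; this is not part of the logged system):

    #!/usr/bin/env python3
    # Diagnostic sketch: list the FlexVolume plugin dir the kubelet is probing.
    import os

    PLUGIN_DIR = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"

    for vendor_driver in sorted(os.listdir(PLUGIN_DIR)):
        driver_dir = os.path.join(PLUGIN_DIR, vendor_driver)
        if not os.path.isdir(driver_dir):
            continue
        for entry in sorted(os.listdir(driver_dir)):
            path = os.path.join(driver_dir, entry)
            print(f"{path} executable={os.access(path, os.X_OK)}")
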
Dec 12 17:27:57.599727 containerd[1911]: time="2025-12-12T17:27:57.598850045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:27:58.323302 kubelet[3528]: E1212 17:27:58.323206 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:28:00.323219 kubelet[3528]: E1212 17:28:00.323131 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:28:00.626590 containerd[1911]: time="2025-12-12T17:28:00.626536124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:00.628175 containerd[1911]: time="2025-12-12T17:28:00.627419408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 12 17:28:00.628913 containerd[1911]: time="2025-12-12T17:28:00.628867124Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:00.633372 containerd[1911]: time="2025-12-12T17:28:00.633319520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:00.634850 containerd[1911]: time="2025-12-12T17:28:00.634754000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.035845887s" Dec 12 17:28:00.635072 containerd[1911]: time="2025-12-12T17:28:00.635035736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:28:00.643105 containerd[1911]: time="2025-12-12T17:28:00.643054472Z" level=info msg="CreateContainer within sandbox \"eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:28:00.658345 containerd[1911]: time="2025-12-12T17:28:00.658265636Z" level=info msg="Container 8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:28:00.669407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025682392.mount: Deactivated successfully. 
Dec 12 17:28:00.678223 containerd[1911]: time="2025-12-12T17:28:00.678171176Z" level=info msg="CreateContainer within sandbox \"eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb\"" Dec 12 17:28:00.681312 containerd[1911]: time="2025-12-12T17:28:00.681239396Z" level=info msg="StartContainer for \"8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb\"" Dec 12 17:28:00.684830 containerd[1911]: time="2025-12-12T17:28:00.684601280Z" level=info msg="connecting to shim 8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb" address="unix:///run/containerd/s/33283d0e0817781778cca616bef90793eadba47f6131a547cf6d2d7d8e3ef99c" protocol=ttrpc version=3 Dec 12 17:28:00.728058 systemd[1]: Started cri-containerd-8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb.scope - libcontainer container 8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb. Dec 12 17:28:00.815626 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 12 17:28:00.815787 kernel: audit: type=1334 audit(1765560480.812:585): prog-id=175 op=LOAD Dec 12 17:28:00.812000 audit: BPF prog-id=175 op=LOAD Dec 12 17:28:00.812000 audit[4291]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4096 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.822487 kernel: audit: type=1300 audit(1765560480.812:585): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4096 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862346639636433323435366337666130613735306436616432616335 Dec 12 17:28:00.828830 kernel: audit: type=1327 audit(1765560480.812:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862346639636433323435366337666130613735306436616432616335 Dec 12 17:28:00.812000 audit: BPF prog-id=176 op=LOAD Dec 12 17:28:00.830993 kernel: audit: type=1334 audit(1765560480.812:586): prog-id=176 op=LOAD Dec 12 17:28:00.837343 kernel: audit: type=1300 audit(1765560480.812:586): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4096 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.812000 audit[4291]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4096 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.812000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862346639636433323435366337666130613735306436616432616335 Dec 12 17:28:00.843844 kernel: audit: type=1327 audit(1765560480.812:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862346639636433323435366337666130613735306436616432616335 Dec 12 17:28:00.815000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:28:00.846018 kernel: audit: type=1334 audit(1765560480.815:587): prog-id=176 op=UNLOAD Dec 12 17:28:00.815000 audit[4291]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.863387 kernel: audit: type=1300 audit(1765560480.815:587): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.863613 kernel: audit: type=1327 audit(1765560480.815:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862346639636433323435366337666130613735306436616432616335 Dec 12 17:28:00.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862346639636433323435366337666130613735306436616432616335 Dec 12 17:28:00.869931 kernel: audit: type=1334 audit(1765560480.815:588): prog-id=175 op=UNLOAD Dec 12 17:28:00.815000 audit: BPF prog-id=175 op=UNLOAD Dec 12 17:28:00.815000 audit[4291]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862346639636433323435366337666130613735306436616432616335 Dec 12 17:28:00.815000 audit: BPF prog-id=177 op=LOAD Dec 12 17:28:00.815000 audit[4291]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4096 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862346639636433323435366337666130613735306436616432616335 Dec 12 17:28:00.892823 
containerd[1911]: time="2025-12-12T17:28:00.892210425Z" level=info msg="StartContainer for \"8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb\" returns successfully" Dec 12 17:28:01.893951 containerd[1911]: time="2025-12-12T17:28:01.893864446Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Dec 12 17:28:01.898380 systemd[1]: cri-containerd-8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb.scope: Deactivated successfully. Dec 12 17:28:01.899653 systemd[1]: cri-containerd-8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb.scope: Consumed 977ms CPU time, 186.1M memory peak, 165.9M written to disk. Dec 12 17:28:01.902000 audit: BPF prog-id=177 op=UNLOAD Dec 12 17:28:01.904493 containerd[1911]: time="2025-12-12T17:28:01.904322098Z" level=info msg="received container exit event container_id:\"8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb\" id:\"8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb\" pid:4303 exited_at:{seconds:1765560481 nanos:903839230}" Dec 12 17:28:01.954568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b4f9cd32456c7fa0a750d6ad2ac5be8bf3981f02dfd5be952eace30cf95a8cb-rootfs.mount: Deactivated successfully. Dec 12 17:28:01.998135 kubelet[3528]: I1212 17:28:01.996976 3528 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:28:02.127633 systemd[1]: Created slice kubepods-burstable-podee3e2608_b96a_4e33_97f4_50403b2c2ff6.slice - libcontainer container kubepods-burstable-podee3e2608_b96a_4e33_97f4_50403b2c2ff6.slice. 
Dec 12 17:28:02.159212 kubelet[3528]: I1212 17:28:02.145428 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr2vm\" (UniqueName: \"kubernetes.io/projected/c263292c-2d61-41bc-b009-e2278ae54431-kube-api-access-nr2vm\") pod \"coredns-674b8bbfcf-h6vwb\" (UID: \"c263292c-2d61-41bc-b009-e2278ae54431\") " pod="kube-system/coredns-674b8bbfcf-h6vwb" Dec 12 17:28:02.159212 kubelet[3528]: I1212 17:28:02.145502 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vjs5\" (UniqueName: \"kubernetes.io/projected/ee3e2608-b96a-4e33-97f4-50403b2c2ff6-kube-api-access-6vjs5\") pod \"coredns-674b8bbfcf-v8vwg\" (UID: \"ee3e2608-b96a-4e33-97f4-50403b2c2ff6\") " pod="kube-system/coredns-674b8bbfcf-v8vwg" Dec 12 17:28:02.159212 kubelet[3528]: I1212 17:28:02.145550 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvxv8\" (UniqueName: \"kubernetes.io/projected/4f54519e-b15c-42cf-aa0f-f8649bda1c94-kube-api-access-gvxv8\") pod \"calico-apiserver-7b5446659b-48cdq\" (UID: \"4f54519e-b15c-42cf-aa0f-f8649bda1c94\") " pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" Dec 12 17:28:02.159212 kubelet[3528]: I1212 17:28:02.145591 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee3e2608-b96a-4e33-97f4-50403b2c2ff6-config-volume\") pod \"coredns-674b8bbfcf-v8vwg\" (UID: \"ee3e2608-b96a-4e33-97f4-50403b2c2ff6\") " pod="kube-system/coredns-674b8bbfcf-v8vwg" Dec 12 17:28:02.159212 kubelet[3528]: I1212 17:28:02.145634 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c263292c-2d61-41bc-b009-e2278ae54431-config-volume\") pod \"coredns-674b8bbfcf-h6vwb\" (UID: \"c263292c-2d61-41bc-b009-e2278ae54431\") " pod="kube-system/coredns-674b8bbfcf-h6vwb" Dec 12 17:28:02.161146 kubelet[3528]: I1212 17:28:02.145817 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4f54519e-b15c-42cf-aa0f-f8649bda1c94-calico-apiserver-certs\") pod \"calico-apiserver-7b5446659b-48cdq\" (UID: \"4f54519e-b15c-42cf-aa0f-f8649bda1c94\") " pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" Dec 12 17:28:02.162817 systemd[1]: Created slice kubepods-burstable-podc263292c_2d61_41bc_b009_e2278ae54431.slice - libcontainer container kubepods-burstable-podc263292c_2d61_41bc_b009_e2278ae54431.slice. Dec 12 17:28:02.198212 systemd[1]: Created slice kubepods-besteffort-pod4f54519e_b15c_42cf_aa0f_f8649bda1c94.slice - libcontainer container kubepods-besteffort-pod4f54519e_b15c_42cf_aa0f_f8649bda1c94.slice. 
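The reload error a few entries back ("unexpected end of JSON input" for /etc/cni/net.d/10-calico.conflist) is containerd reading the config list while install-cni is still writing it. A hedged sketch, assuming access to the node's filesystem, for confirming the file has become complete JSON; only the path is taken from the log:

```python
# Illustrative check of the CNI config list containerd complained about.
import json
import pathlib

conflist = pathlib.Path("/etc/cni/net.d/10-calico.conflist")  # path from the log
try:
    data = json.loads(conflist.read_text())
    print("valid conflist:", data.get("name"), "with", len(data.get("plugins", [])), "plugin(s)")
except FileNotFoundError:
    print("conflist not written yet")
except json.JSONDecodeError as err:
    print("conflist still truncated or invalid:", err)
```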
Dec 12 17:28:02.247247 kubelet[3528]: I1212 17:28:02.247174 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b11dd849-e38b-40e1-a2d3-0061a9f777d6-tigera-ca-bundle\") pod \"calico-kube-controllers-6755b5785f-427k6\" (UID: \"b11dd849-e38b-40e1-a2d3-0061a9f777d6\") " pod="calico-system/calico-kube-controllers-6755b5785f-427k6" Dec 12 17:28:02.249770 kubelet[3528]: I1212 17:28:02.247448 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq5rm\" (UniqueName: \"kubernetes.io/projected/b11dd849-e38b-40e1-a2d3-0061a9f777d6-kube-api-access-pq5rm\") pod \"calico-kube-controllers-6755b5785f-427k6\" (UID: \"b11dd849-e38b-40e1-a2d3-0061a9f777d6\") " pod="calico-system/calico-kube-controllers-6755b5785f-427k6" Dec 12 17:28:02.345542 systemd[1]: Created slice kubepods-besteffort-podb11dd849_e38b_40e1_a2d3_0061a9f777d6.slice - libcontainer container kubepods-besteffort-podb11dd849_e38b_40e1_a2d3_0061a9f777d6.slice. Dec 12 17:28:02.351221 kubelet[3528]: I1212 17:28:02.348549 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4993f81d-df62-4e56-b3d3-f820e3c156d6-goldmane-key-pair\") pod \"goldmane-666569f655-fxswz\" (UID: \"4993f81d-df62-4e56-b3d3-f820e3c156d6\") " pod="calico-system/goldmane-666569f655-fxswz" Dec 12 17:28:02.351221 kubelet[3528]: I1212 17:28:02.348627 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4993f81d-df62-4e56-b3d3-f820e3c156d6-goldmane-ca-bundle\") pod \"goldmane-666569f655-fxswz\" (UID: \"4993f81d-df62-4e56-b3d3-f820e3c156d6\") " pod="calico-system/goldmane-666569f655-fxswz" Dec 12 17:28:02.351221 kubelet[3528]: I1212 17:28:02.349168 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4993f81d-df62-4e56-b3d3-f820e3c156d6-config\") pod \"goldmane-666569f655-fxswz\" (UID: \"4993f81d-df62-4e56-b3d3-f820e3c156d6\") " pod="calico-system/goldmane-666569f655-fxswz" Dec 12 17:28:02.351221 kubelet[3528]: I1212 17:28:02.349222 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6zf6\" (UniqueName: \"kubernetes.io/projected/4993f81d-df62-4e56-b3d3-f820e3c156d6-kube-api-access-v6zf6\") pod \"goldmane-666569f655-fxswz\" (UID: \"4993f81d-df62-4e56-b3d3-f820e3c156d6\") " pod="calico-system/goldmane-666569f655-fxswz" Dec 12 17:28:02.380206 systemd[1]: Created slice kubepods-besteffort-pod4993f81d_df62_4e56_b3d3_f820e3c156d6.slice - libcontainer container kubepods-besteffort-pod4993f81d_df62_4e56_b3d3_f820e3c156d6.slice. Dec 12 17:28:02.424196 systemd[1]: Created slice kubepods-besteffort-pod464adca8_6b09_4273_9180_6050c84a6f28.slice - libcontainer container kubepods-besteffort-pod464adca8_6b09_4273_9180_6050c84a6f28.slice. 
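The "Created slice kubepods-…" entries show the systemd cgroup naming used here: QoS class plus the pod UID with dashes mapped to underscores. A small sketch of that pattern as inferred from these entries (shown to help read the journal, not as kubelet's actual code):

```python
# Slice-name pattern visible in the log, inferred from the entries above.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("besteffort", "464adca8-6b09-4273-9180-6050c84a6f28"))
# -> kubepods-besteffort-pod464adca8_6b09_4273_9180_6050c84a6f28.slice
```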
Dec 12 17:28:02.451335 kubelet[3528]: I1212 17:28:02.449626 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/886537c6-3a97-4f12-b3ec-292f96a32a3f-whisker-ca-bundle\") pod \"whisker-68dd9f4cf6-xrsdh\" (UID: \"886537c6-3a97-4f12-b3ec-292f96a32a3f\") " pod="calico-system/whisker-68dd9f4cf6-xrsdh" Dec 12 17:28:02.451335 kubelet[3528]: I1212 17:28:02.449777 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k6xw\" (UniqueName: \"kubernetes.io/projected/886537c6-3a97-4f12-b3ec-292f96a32a3f-kube-api-access-4k6xw\") pod \"whisker-68dd9f4cf6-xrsdh\" (UID: \"886537c6-3a97-4f12-b3ec-292f96a32a3f\") " pod="calico-system/whisker-68dd9f4cf6-xrsdh" Dec 12 17:28:02.451335 kubelet[3528]: I1212 17:28:02.449878 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/886537c6-3a97-4f12-b3ec-292f96a32a3f-whisker-backend-key-pair\") pod \"whisker-68dd9f4cf6-xrsdh\" (UID: \"886537c6-3a97-4f12-b3ec-292f96a32a3f\") " pod="calico-system/whisker-68dd9f4cf6-xrsdh" Dec 12 17:28:02.451335 kubelet[3528]: I1212 17:28:02.450014 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc92w\" (UniqueName: \"kubernetes.io/projected/464adca8-6b09-4273-9180-6050c84a6f28-kube-api-access-jc92w\") pod \"calico-apiserver-7b5446659b-7cz6s\" (UID: \"464adca8-6b09-4273-9180-6050c84a6f28\") " pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" Dec 12 17:28:02.451335 kubelet[3528]: I1212 17:28:02.450063 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/464adca8-6b09-4273-9180-6050c84a6f28-calico-apiserver-certs\") pod \"calico-apiserver-7b5446659b-7cz6s\" (UID: \"464adca8-6b09-4273-9180-6050c84a6f28\") " pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" Dec 12 17:28:02.450970 systemd[1]: Created slice kubepods-besteffort-pod5ad89cf6_178c_4c89_9906_56d3d4e0dba0.slice - libcontainer container kubepods-besteffort-pod5ad89cf6_178c_4c89_9906_56d3d4e0dba0.slice. Dec 12 17:28:02.471067 containerd[1911]: time="2025-12-12T17:28:02.470832921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8vwg,Uid:ee3e2608-b96a-4e33-97f4-50403b2c2ff6,Namespace:kube-system,Attempt:0,}" Dec 12 17:28:02.476825 containerd[1911]: time="2025-12-12T17:28:02.475906209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5phxm,Uid:5ad89cf6-178c-4c89-9906-56d3d4e0dba0,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:02.476825 containerd[1911]: time="2025-12-12T17:28:02.476728449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h6vwb,Uid:c263292c-2d61-41bc-b009-e2278ae54431,Namespace:kube-system,Attempt:0,}" Dec 12 17:28:02.486911 systemd[1]: Created slice kubepods-besteffort-pod886537c6_3a97_4f12_b3ec_292f96a32a3f.slice - libcontainer container kubepods-besteffort-pod886537c6_3a97_4f12_b3ec_292f96a32a3f.slice. 
Dec 12 17:28:02.507200 containerd[1911]: time="2025-12-12T17:28:02.507128325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5446659b-48cdq,Uid:4f54519e-b15c-42cf-aa0f-f8649bda1c94,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:28:02.646713 containerd[1911]: time="2025-12-12T17:28:02.645826054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:28:02.656173 containerd[1911]: time="2025-12-12T17:28:02.656002966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6755b5785f-427k6,Uid:b11dd849-e38b-40e1-a2d3-0061a9f777d6,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:02.714025 containerd[1911]: time="2025-12-12T17:28:02.713833570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fxswz,Uid:4993f81d-df62-4e56-b3d3-f820e3c156d6,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:02.755432 containerd[1911]: time="2025-12-12T17:28:02.754977671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5446659b-7cz6s,Uid:464adca8-6b09-4273-9180-6050c84a6f28,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:28:02.822411 containerd[1911]: time="2025-12-12T17:28:02.822339779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68dd9f4cf6-xrsdh,Uid:886537c6-3a97-4f12-b3ec-292f96a32a3f,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:03.097985 containerd[1911]: time="2025-12-12T17:28:03.097790900Z" level=error msg="Failed to destroy network for sandbox \"c17cc5fde181d98732603b66021f00e8c80f62d53c23f4dc9e40bee620f18fb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.105730 systemd[1]: run-netns-cni\x2d49eeafac\x2d3b9a\x2d6244\x2d352e\x2d4b8d16e4f042.mount: Deactivated successfully. Dec 12 17:28:03.107782 containerd[1911]: time="2025-12-12T17:28:03.107697020Z" level=error msg="Failed to destroy network for sandbox \"d61692138d6d235058986a50fb117a69dc72f5258f0317a3606f172d73ba9836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.113298 systemd[1]: run-netns-cni\x2d733eb486\x2d158f\x2d427b\x2d0446\x2d02c2fbd92d12.mount: Deactivated successfully. 
Dec 12 17:28:03.119150 containerd[1911]: time="2025-12-12T17:28:03.118645304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5phxm,Uid:5ad89cf6-178c-4c89-9906-56d3d4e0dba0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c17cc5fde181d98732603b66021f00e8c80f62d53c23f4dc9e40bee620f18fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.120925 kubelet[3528]: E1212 17:28:03.120842 3528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c17cc5fde181d98732603b66021f00e8c80f62d53c23f4dc9e40bee620f18fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.121715 kubelet[3528]: E1212 17:28:03.120954 3528 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c17cc5fde181d98732603b66021f00e8c80f62d53c23f4dc9e40bee620f18fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5phxm" Dec 12 17:28:03.121715 kubelet[3528]: E1212 17:28:03.120991 3528 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c17cc5fde181d98732603b66021f00e8c80f62d53c23f4dc9e40bee620f18fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5phxm" Dec 12 17:28:03.121715 kubelet[3528]: E1212 17:28:03.121068 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c17cc5fde181d98732603b66021f00e8c80f62d53c23f4dc9e40bee620f18fb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:28:03.126953 containerd[1911]: time="2025-12-12T17:28:03.126873981Z" level=error msg="Failed to destroy network for sandbox \"66a6f9be9883fa2fd7581bc09bb79b858d3563c1e40b189f1354b4b8107e015f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.132291 systemd[1]: run-netns-cni\x2d8c01da06\x2d2c09\x2d567e\x2d3f8c\x2dd3660ac87f0a.mount: Deactivated successfully. 
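Every RunPodSandbox failure in this stretch carries the same root cause string: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file calico-node writes once it is running (which happens further down, after the calico/node image finishes pulling). A hedged sketch of that check, with only the path taken verbatim from the errors:

```python
# Illustrative only: the condition the Calico CNI plugin reports in the
# sandbox errors above; the file appears once calico-node is running.
import os

NODENAME = "/var/lib/calico/nodename"  # path quoted in the RunPodSandbox errors

if os.path.isfile(NODENAME):
    with open(NODENAME) as fh:
        print("calico-node registered this node as:", fh.read().strip())
else:
    print(f"{NODENAME} missing; pod networking keeps failing until calico-node runs")
```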
Dec 12 17:28:03.137554 containerd[1911]: time="2025-12-12T17:28:03.136807413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6755b5785f-427k6,Uid:b11dd849-e38b-40e1-a2d3-0061a9f777d6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d61692138d6d235058986a50fb117a69dc72f5258f0317a3606f172d73ba9836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.140027 kubelet[3528]: E1212 17:28:03.139918 3528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d61692138d6d235058986a50fb117a69dc72f5258f0317a3606f172d73ba9836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.140027 kubelet[3528]: E1212 17:28:03.140007 3528 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d61692138d6d235058986a50fb117a69dc72f5258f0317a3606f172d73ba9836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" Dec 12 17:28:03.140436 kubelet[3528]: E1212 17:28:03.140043 3528 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d61692138d6d235058986a50fb117a69dc72f5258f0317a3606f172d73ba9836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" Dec 12 17:28:03.141613 kubelet[3528]: E1212 17:28:03.140304 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6755b5785f-427k6_calico-system(b11dd849-e38b-40e1-a2d3-0061a9f777d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6755b5785f-427k6_calico-system(b11dd849-e38b-40e1-a2d3-0061a9f777d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d61692138d6d235058986a50fb117a69dc72f5258f0317a3606f172d73ba9836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:28:03.143509 containerd[1911]: time="2025-12-12T17:28:03.141738297Z" level=error msg="Failed to destroy network for sandbox \"542d78505afcc3a936e7a1d17f7ffb8138bb5c0fe60969aafe8f0c0633b2b366\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.149218 systemd[1]: run-netns-cni\x2d4649184d\x2dab97\x2def9b\x2d74e9\x2de3dc7adf5650.mount: Deactivated successfully. 
Dec 12 17:28:03.162454 containerd[1911]: time="2025-12-12T17:28:03.162355989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h6vwb,Uid:c263292c-2d61-41bc-b009-e2278ae54431,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"542d78505afcc3a936e7a1d17f7ffb8138bb5c0fe60969aafe8f0c0633b2b366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.167118 containerd[1911]: time="2025-12-12T17:28:03.166424841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8vwg,Uid:ee3e2608-b96a-4e33-97f4-50403b2c2ff6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a6f9be9883fa2fd7581bc09bb79b858d3563c1e40b189f1354b4b8107e015f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.167835 kubelet[3528]: E1212 17:28:03.167394 3528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a6f9be9883fa2fd7581bc09bb79b858d3563c1e40b189f1354b4b8107e015f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.167835 kubelet[3528]: E1212 17:28:03.167332 3528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"542d78505afcc3a936e7a1d17f7ffb8138bb5c0fe60969aafe8f0c0633b2b366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.167835 kubelet[3528]: E1212 17:28:03.167518 3528 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"542d78505afcc3a936e7a1d17f7ffb8138bb5c0fe60969aafe8f0c0633b2b366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h6vwb" Dec 12 17:28:03.167835 kubelet[3528]: E1212 17:28:03.167743 3528 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"542d78505afcc3a936e7a1d17f7ffb8138bb5c0fe60969aafe8f0c0633b2b366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h6vwb" Dec 12 17:28:03.170459 kubelet[3528]: E1212 17:28:03.167581 3528 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a6f9be9883fa2fd7581bc09bb79b858d3563c1e40b189f1354b4b8107e015f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v8vwg" Dec 12 17:28:03.170459 kubelet[3528]: E1212 
17:28:03.167810 3528 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a6f9be9883fa2fd7581bc09bb79b858d3563c1e40b189f1354b4b8107e015f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v8vwg" Dec 12 17:28:03.170459 kubelet[3528]: E1212 17:28:03.167888 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v8vwg_kube-system(ee3e2608-b96a-4e33-97f4-50403b2c2ff6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-v8vwg_kube-system(ee3e2608-b96a-4e33-97f4-50403b2c2ff6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66a6f9be9883fa2fd7581bc09bb79b858d3563c1e40b189f1354b4b8107e015f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v8vwg" podUID="ee3e2608-b96a-4e33-97f4-50403b2c2ff6" Dec 12 17:28:03.171906 kubelet[3528]: E1212 17:28:03.169396 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h6vwb_kube-system(c263292c-2d61-41bc-b009-e2278ae54431)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h6vwb_kube-system(c263292c-2d61-41bc-b009-e2278ae54431)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"542d78505afcc3a936e7a1d17f7ffb8138bb5c0fe60969aafe8f0c0633b2b366\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h6vwb" podUID="c263292c-2d61-41bc-b009-e2278ae54431" Dec 12 17:28:03.173055 containerd[1911]: time="2025-12-12T17:28:03.172889205Z" level=error msg="Failed to destroy network for sandbox \"23427dbe8d215a464efcad22df2a2e94807d0a38e186791097807ccf665ce63a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.179538 containerd[1911]: time="2025-12-12T17:28:03.179415129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5446659b-48cdq,Uid:4f54519e-b15c-42cf-aa0f-f8649bda1c94,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23427dbe8d215a464efcad22df2a2e94807d0a38e186791097807ccf665ce63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.180244 kubelet[3528]: E1212 17:28:03.180107 3528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23427dbe8d215a464efcad22df2a2e94807d0a38e186791097807ccf665ce63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.180244 kubelet[3528]: E1212 17:28:03.180202 3528 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23427dbe8d215a464efcad22df2a2e94807d0a38e186791097807ccf665ce63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" Dec 12 17:28:03.180651 kubelet[3528]: E1212 17:28:03.180240 3528 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23427dbe8d215a464efcad22df2a2e94807d0a38e186791097807ccf665ce63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" Dec 12 17:28:03.180651 kubelet[3528]: E1212 17:28:03.180318 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b5446659b-48cdq_calico-apiserver(4f54519e-b15c-42cf-aa0f-f8649bda1c94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b5446659b-48cdq_calico-apiserver(4f54519e-b15c-42cf-aa0f-f8649bda1c94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23427dbe8d215a464efcad22df2a2e94807d0a38e186791097807ccf665ce63a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:28:03.210070 containerd[1911]: time="2025-12-12T17:28:03.210005277Z" level=error msg="Failed to destroy network for sandbox \"84a37c53f042bf98c7447906c4cf57eb025213f175ce78746431162e75583c78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.214890 containerd[1911]: time="2025-12-12T17:28:03.214624653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fxswz,Uid:4993f81d-df62-4e56-b3d3-f820e3c156d6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a37c53f042bf98c7447906c4cf57eb025213f175ce78746431162e75583c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.215780 kubelet[3528]: E1212 17:28:03.215718 3528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a37c53f042bf98c7447906c4cf57eb025213f175ce78746431162e75583c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.216097 kubelet[3528]: E1212 17:28:03.215992 3528 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a37c53f042bf98c7447906c4cf57eb025213f175ce78746431162e75583c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fxswz" Dec 12 17:28:03.216235 kubelet[3528]: E1212 17:28:03.216061 3528 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a37c53f042bf98c7447906c4cf57eb025213f175ce78746431162e75583c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fxswz" Dec 12 17:28:03.216630 kubelet[3528]: E1212 17:28:03.216572 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-fxswz_calico-system(4993f81d-df62-4e56-b3d3-f820e3c156d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-fxswz_calico-system(4993f81d-df62-4e56-b3d3-f820e3c156d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84a37c53f042bf98c7447906c4cf57eb025213f175ce78746431162e75583c78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:28:03.254243 containerd[1911]: time="2025-12-12T17:28:03.254068821Z" level=error msg="Failed to destroy network for sandbox \"cd32782a5a795b7da5ad9044ce73f9429a467fe406ee3bc3becb7dc2f41f7ca7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.255729 containerd[1911]: time="2025-12-12T17:28:03.255643353Z" level=error msg="Failed to destroy network for sandbox \"7c42582a81ef3148f06559bc9b57dd36401e0e6e1dd4c872cb3caf4db9dff84f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.258501 containerd[1911]: time="2025-12-12T17:28:03.258335925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68dd9f4cf6-xrsdh,Uid:886537c6-3a97-4f12-b3ec-292f96a32a3f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd32782a5a795b7da5ad9044ce73f9429a467fe406ee3bc3becb7dc2f41f7ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.258945 kubelet[3528]: E1212 17:28:03.258888 3528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd32782a5a795b7da5ad9044ce73f9429a467fe406ee3bc3becb7dc2f41f7ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.259165 kubelet[3528]: E1212 17:28:03.259121 3528 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd32782a5a795b7da5ad9044ce73f9429a467fe406ee3bc3becb7dc2f41f7ca7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68dd9f4cf6-xrsdh" Dec 12 17:28:03.259358 containerd[1911]: time="2025-12-12T17:28:03.259227189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5446659b-7cz6s,Uid:464adca8-6b09-4273-9180-6050c84a6f28,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c42582a81ef3148f06559bc9b57dd36401e0e6e1dd4c872cb3caf4db9dff84f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.259616 kubelet[3528]: E1212 17:28:03.259319 3528 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd32782a5a795b7da5ad9044ce73f9429a467fe406ee3bc3becb7dc2f41f7ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68dd9f4cf6-xrsdh" Dec 12 17:28:03.259616 kubelet[3528]: E1212 17:28:03.259512 3528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c42582a81ef3148f06559bc9b57dd36401e0e6e1dd4c872cb3caf4db9dff84f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:28:03.259616 kubelet[3528]: E1212 17:28:03.259599 3528 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c42582a81ef3148f06559bc9b57dd36401e0e6e1dd4c872cb3caf4db9dff84f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" Dec 12 17:28:03.259948 kubelet[3528]: E1212 17:28:03.259633 3528 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c42582a81ef3148f06559bc9b57dd36401e0e6e1dd4c872cb3caf4db9dff84f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" Dec 12 17:28:03.259948 kubelet[3528]: E1212 17:28:03.259706 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68dd9f4cf6-xrsdh_calico-system(886537c6-3a97-4f12-b3ec-292f96a32a3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68dd9f4cf6-xrsdh_calico-system(886537c6-3a97-4f12-b3ec-292f96a32a3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd32782a5a795b7da5ad9044ce73f9429a467fe406ee3bc3becb7dc2f41f7ca7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68dd9f4cf6-xrsdh" podUID="886537c6-3a97-4f12-b3ec-292f96a32a3f" Dec 12 17:28:03.259948 
kubelet[3528]: E1212 17:28:03.259771 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b5446659b-7cz6s_calico-apiserver(464adca8-6b09-4273-9180-6050c84a6f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b5446659b-7cz6s_calico-apiserver(464adca8-6b09-4273-9180-6050c84a6f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c42582a81ef3148f06559bc9b57dd36401e0e6e1dd4c872cb3caf4db9dff84f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:28:03.956468 systemd[1]: run-netns-cni\x2d6504b830\x2dd619\x2d6d7a\x2dbc7d\x2dff016945cacc.mount: Deactivated successfully. Dec 12 17:28:03.956636 systemd[1]: run-netns-cni\x2d91517cc5\x2d2240\x2dae28\x2da331\x2d7a4688458e76.mount: Deactivated successfully. Dec 12 17:28:03.957854 systemd[1]: run-netns-cni\x2d3deaf39c\x2d01bd\x2db419\x2dd3ec\x2d0ec39e20dfad.mount: Deactivated successfully. Dec 12 17:28:03.957989 systemd[1]: run-netns-cni\x2d0bd3524c\x2d4166\x2d00ed\x2d9b25\x2d1c5f45bde8c8.mount: Deactivated successfully. Dec 12 17:28:10.068194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706914356.mount: Deactivated successfully. Dec 12 17:28:10.113724 containerd[1911]: time="2025-12-12T17:28:10.113170059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:10.114857 containerd[1911]: time="2025-12-12T17:28:10.114541143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:28:10.116054 containerd[1911]: time="2025-12-12T17:28:10.115994571Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:10.119462 containerd[1911]: time="2025-12-12T17:28:10.119380731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:10.121196 containerd[1911]: time="2025-12-12T17:28:10.120463827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.474557301s" Dec 12 17:28:10.121196 containerd[1911]: time="2025-12-12T17:28:10.120554091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:28:10.173458 containerd[1911]: time="2025-12-12T17:28:10.173386684Z" level=info msg="CreateContainer within sandbox \"eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:28:10.188793 containerd[1911]: time="2025-12-12T17:28:10.188735068Z" level=info msg="Container 42ff3753694e351d0179efeb6569f1f984402b8aa49692fe73a42318100ec6ba: CDI devices 
from CRI Config.CDIDevices: []" Dec 12 17:28:10.212464 containerd[1911]: time="2025-12-12T17:28:10.212113396Z" level=info msg="CreateContainer within sandbox \"eb6da54ee02007cdeba889c55bf51033da313e833ebeecce50d2ae26faba0ac8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"42ff3753694e351d0179efeb6569f1f984402b8aa49692fe73a42318100ec6ba\"" Dec 12 17:28:10.215579 containerd[1911]: time="2025-12-12T17:28:10.215529964Z" level=info msg="StartContainer for \"42ff3753694e351d0179efeb6569f1f984402b8aa49692fe73a42318100ec6ba\"" Dec 12 17:28:10.219246 containerd[1911]: time="2025-12-12T17:28:10.219189148Z" level=info msg="connecting to shim 42ff3753694e351d0179efeb6569f1f984402b8aa49692fe73a42318100ec6ba" address="unix:///run/containerd/s/33283d0e0817781778cca616bef90793eadba47f6131a547cf6d2d7d8e3ef99c" protocol=ttrpc version=3 Dec 12 17:28:10.294045 systemd[1]: Started cri-containerd-42ff3753694e351d0179efeb6569f1f984402b8aa49692fe73a42318100ec6ba.scope - libcontainer container 42ff3753694e351d0179efeb6569f1f984402b8aa49692fe73a42318100ec6ba. Dec 12 17:28:10.369000 audit: BPF prog-id=178 op=LOAD Dec 12 17:28:10.371522 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 12 17:28:10.371641 kernel: audit: type=1334 audit(1765560490.369:591): prog-id=178 op=LOAD Dec 12 17:28:10.369000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4096 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.379080 kernel: audit: type=1300 audit(1765560490.369:591): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4096 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.379206 kernel: audit: type=1327 audit(1765560490.369:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666633373533363934653335316430313739656665623635363966 Dec 12 17:28:10.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666633373533363934653335316430313739656665623635363966 Dec 12 17:28:10.384894 kernel: audit: type=1334 audit(1765560490.369:592): prog-id=179 op=LOAD Dec 12 17:28:10.369000 audit: BPF prog-id=179 op=LOAD Dec 12 17:28:10.369000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4096 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.392813 kernel: audit: type=1300 audit(1765560490.369:592): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4096 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.369000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666633373533363934653335316430313739656665623635363966 Dec 12 17:28:10.398820 kernel: audit: type=1327 audit(1765560490.369:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666633373533363934653335316430313739656665623635363966 Dec 12 17:28:10.398967 kernel: audit: type=1334 audit(1765560490.372:593): prog-id=179 op=UNLOAD Dec 12 17:28:10.372000 audit: BPF prog-id=179 op=UNLOAD Dec 12 17:28:10.372000 audit[4563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.406405 kernel: audit: type=1300 audit(1765560490.372:593): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666633373533363934653335316430313739656665623635363966 Dec 12 17:28:10.413114 kernel: audit: type=1327 audit(1765560490.372:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666633373533363934653335316430313739656665623635363966 Dec 12 17:28:10.372000 audit: BPF prog-id=178 op=UNLOAD Dec 12 17:28:10.415409 kernel: audit: type=1334 audit(1765560490.372:594): prog-id=178 op=UNLOAD Dec 12 17:28:10.372000 audit[4563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4096 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666633373533363934653335316430313739656665623635363966 Dec 12 17:28:10.372000 audit: BPF prog-id=180 op=LOAD Dec 12 17:28:10.372000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4096 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666633373533363934653335316430313739656665623635363966 Dec 12 17:28:10.452763 
containerd[1911]: time="2025-12-12T17:28:10.452629157Z" level=info msg="StartContainer for \"42ff3753694e351d0179efeb6569f1f984402b8aa49692fe73a42318100ec6ba\" returns successfully" Dec 12 17:28:10.727786 kubelet[3528]: I1212 17:28:10.727532 3528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8pvkh" podStartSLOduration=2.488082662 podStartE2EDuration="18.72748131s" podCreationTimestamp="2025-12-12 17:27:52 +0000 UTC" firstStartedPulling="2025-12-12 17:27:53.882937287 +0000 UTC m=+34.847218722" lastFinishedPulling="2025-12-12 17:28:10.122335947 +0000 UTC m=+51.086617370" observedRunningTime="2025-12-12 17:28:10.718616814 +0000 UTC m=+51.682898285" watchObservedRunningTime="2025-12-12 17:28:10.72748131 +0000 UTC m=+51.691762817" Dec 12 17:28:10.879119 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:28:10.879265 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 17:28:11.247070 kubelet[3528]: I1212 17:28:11.246933 3528 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/886537c6-3a97-4f12-b3ec-292f96a32a3f-whisker-backend-key-pair\") pod \"886537c6-3a97-4f12-b3ec-292f96a32a3f\" (UID: \"886537c6-3a97-4f12-b3ec-292f96a32a3f\") " Dec 12 17:28:11.247070 kubelet[3528]: I1212 17:28:11.247022 3528 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k6xw\" (UniqueName: \"kubernetes.io/projected/886537c6-3a97-4f12-b3ec-292f96a32a3f-kube-api-access-4k6xw\") pod \"886537c6-3a97-4f12-b3ec-292f96a32a3f\" (UID: \"886537c6-3a97-4f12-b3ec-292f96a32a3f\") " Dec 12 17:28:11.247495 kubelet[3528]: I1212 17:28:11.247360 3528 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/886537c6-3a97-4f12-b3ec-292f96a32a3f-whisker-ca-bundle\") pod \"886537c6-3a97-4f12-b3ec-292f96a32a3f\" (UID: \"886537c6-3a97-4f12-b3ec-292f96a32a3f\") " Dec 12 17:28:11.248356 kubelet[3528]: I1212 17:28:11.248315 3528 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886537c6-3a97-4f12-b3ec-292f96a32a3f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "886537c6-3a97-4f12-b3ec-292f96a32a3f" (UID: "886537c6-3a97-4f12-b3ec-292f96a32a3f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:28:11.261261 systemd[1]: var-lib-kubelet-pods-886537c6\x2d3a97\x2d4f12\x2db3ec\x2d292f96a32a3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4k6xw.mount: Deactivated successfully. Dec 12 17:28:11.263904 systemd[1]: var-lib-kubelet-pods-886537c6\x2d3a97\x2d4f12\x2db3ec\x2d292f96a32a3f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:28:11.266513 kubelet[3528]: I1212 17:28:11.266430 3528 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886537c6-3a97-4f12-b3ec-292f96a32a3f-kube-api-access-4k6xw" (OuterVolumeSpecName: "kube-api-access-4k6xw") pod "886537c6-3a97-4f12-b3ec-292f96a32a3f" (UID: "886537c6-3a97-4f12-b3ec-292f96a32a3f"). InnerVolumeSpecName "kube-api-access-4k6xw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:28:11.273379 kubelet[3528]: I1212 17:28:11.272686 3528 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886537c6-3a97-4f12-b3ec-292f96a32a3f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "886537c6-3a97-4f12-b3ec-292f96a32a3f" (UID: "886537c6-3a97-4f12-b3ec-292f96a32a3f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:28:11.348872 kubelet[3528]: I1212 17:28:11.348806 3528 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/886537c6-3a97-4f12-b3ec-292f96a32a3f-whisker-backend-key-pair\") on node \"ip-172-31-17-228\" DevicePath \"\"" Dec 12 17:28:11.348872 kubelet[3528]: I1212 17:28:11.348861 3528 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4k6xw\" (UniqueName: \"kubernetes.io/projected/886537c6-3a97-4f12-b3ec-292f96a32a3f-kube-api-access-4k6xw\") on node \"ip-172-31-17-228\" DevicePath \"\"" Dec 12 17:28:11.352964 kubelet[3528]: I1212 17:28:11.348888 3528 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/886537c6-3a97-4f12-b3ec-292f96a32a3f-whisker-ca-bundle\") on node \"ip-172-31-17-228\" DevicePath \"\"" Dec 12 17:28:11.350269 systemd[1]: Removed slice kubepods-besteffort-pod886537c6_3a97_4f12_b3ec_292f96a32a3f.slice - libcontainer container kubepods-besteffort-pod886537c6_3a97_4f12_b3ec_292f96a32a3f.slice. Dec 12 17:28:11.839521 systemd[1]: Created slice kubepods-besteffort-podc3f845e7_b0d1_412d_aad9_3771e0979bfc.slice - libcontainer container kubepods-besteffort-podc3f845e7_b0d1_412d_aad9_3771e0979bfc.slice. 
Dec 12 17:28:11.853827 kubelet[3528]: I1212 17:28:11.853475 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3f845e7-b0d1-412d-aad9-3771e0979bfc-whisker-backend-key-pair\") pod \"whisker-6b7d45d746-ssppz\" (UID: \"c3f845e7-b0d1-412d-aad9-3771e0979bfc\") " pod="calico-system/whisker-6b7d45d746-ssppz" Dec 12 17:28:11.855483 kubelet[3528]: I1212 17:28:11.853799 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zbvt\" (UniqueName: \"kubernetes.io/projected/c3f845e7-b0d1-412d-aad9-3771e0979bfc-kube-api-access-8zbvt\") pod \"whisker-6b7d45d746-ssppz\" (UID: \"c3f845e7-b0d1-412d-aad9-3771e0979bfc\") " pod="calico-system/whisker-6b7d45d746-ssppz" Dec 12 17:28:11.855483 kubelet[3528]: I1212 17:28:11.854355 3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f845e7-b0d1-412d-aad9-3771e0979bfc-whisker-ca-bundle\") pod \"whisker-6b7d45d746-ssppz\" (UID: \"c3f845e7-b0d1-412d-aad9-3771e0979bfc\") " pod="calico-system/whisker-6b7d45d746-ssppz" Dec 12 17:28:12.149521 containerd[1911]: time="2025-12-12T17:28:12.149320205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b7d45d746-ssppz,Uid:c3f845e7-b0d1-412d-aad9-3771e0979bfc,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:13.331518 kubelet[3528]: I1212 17:28:13.330599 3528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886537c6-3a97-4f12-b3ec-292f96a32a3f" path="/var/lib/kubelet/pods/886537c6-3a97-4f12-b3ec-292f96a32a3f/volumes" Dec 12 17:28:13.596000 audit: BPF prog-id=181 op=LOAD Dec 12 17:28:13.596000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd89ae058 a2=98 a3=ffffd89ae048 items=0 ppid=4696 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.596000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:28:13.597000 audit: BPF prog-id=181 op=UNLOAD Dec 12 17:28:13.597000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd89ae028 a3=0 items=0 ppid=4696 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.597000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:28:13.598000 audit: BPF prog-id=182 op=LOAD Dec 12 17:28:13.598000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd89adf08 a2=74 a3=95 items=0 ppid=4696 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.598000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:28:13.598000 audit: BPF prog-id=182 op=UNLOAD Dec 12 17:28:13.598000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.598000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:28:13.598000 audit: BPF prog-id=183 op=LOAD Dec 12 17:28:13.598000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd89adf38 a2=40 a3=ffffd89adf68 items=0 ppid=4696 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.598000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:28:13.599000 audit: BPF prog-id=183 op=UNLOAD Dec 12 17:28:13.599000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd89adf68 items=0 ppid=4696 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.599000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:28:13.622000 audit: BPF prog-id=184 op=LOAD Dec 12 17:28:13.622000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd3f82208 a2=98 a3=ffffd3f821f8 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.622000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.622000 audit: BPF prog-id=184 op=UNLOAD Dec 12 17:28:13.622000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd3f821d8 a3=0 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.622000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.622000 audit: BPF prog-id=185 op=LOAD Dec 12 17:28:13.622000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd3f81e98 a2=74 a3=95 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.622000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.624000 audit: BPF prog-id=185 op=UNLOAD Dec 12 17:28:13.624000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.624000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.624000 audit: BPF prog-id=186 op=LOAD Dec 12 17:28:13.624000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd3f81ef8 a2=94 a3=2 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.624000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.625000 audit: BPF prog-id=186 op=UNLOAD Dec 12 17:28:13.625000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.625000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.677689 systemd-networkd[1628]: cali8614af73825: Link UP Dec 12 17:28:13.680826 systemd-networkd[1628]: cali8614af73825: Gained carrier Dec 12 17:28:13.683384 (udev-worker)[4837]: Network interface NamePolicy= disabled on kernel command line. 
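The PROCTITLE fields in the audit records above are the hex-encoded, NUL-separated argv of the audited process, so the opaque hex can be turned back into the bpftool command lines Calico is running. A minimal decoding sketch in Python; the sample value is the proctitle logged for pid 4818 above:

```python
# Audit PROCTITLE values are hex-encoded argv with NUL separators.
proctitle = "627066746F6F6C006D6170006C697374002D2D6A736F6E"  # from the pid 4818 records above
argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
print(" ".join(argv))  # -> bpftool map list --json
```

The longer values decode the same way; for example, the pid 4816 proctitle above comes out as `bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0`. (Very long command lines may be truncated by the audit subsystem, so a decoded tail can be cut off.)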
Dec 12 17:28:13.810507 containerd[1911]: 2025-12-12 17:28:12.228 [INFO][4673] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:28:13.810507 containerd[1911]: 2025-12-12 17:28:13.157 [INFO][4673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0 whisker-6b7d45d746- calico-system c3f845e7-b0d1-412d-aad9-3771e0979bfc 947 0 2025-12-12 17:28:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b7d45d746 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-228 whisker-6b7d45d746-ssppz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8614af73825 [] [] }} ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Namespace="calico-system" Pod="whisker-6b7d45d746-ssppz" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-" Dec 12 17:28:13.810507 containerd[1911]: 2025-12-12 17:28:13.157 [INFO][4673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Namespace="calico-system" Pod="whisker-6b7d45d746-ssppz" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" Dec 12 17:28:13.810507 containerd[1911]: 2025-12-12 17:28:13.521 [INFO][4798] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" HandleID="k8s-pod-network.47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Workload="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" Dec 12 17:28:13.811339 containerd[1911]: 2025-12-12 17:28:13.522 [INFO][4798] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" HandleID="k8s-pod-network.47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Workload="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dbc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"whisker-6b7d45d746-ssppz", "timestamp":"2025-12-12 17:28:13.521144156 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:13.811339 containerd[1911]: 2025-12-12 17:28:13.522 [INFO][4798] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:13.811339 containerd[1911]: 2025-12-12 17:28:13.523 [INFO][4798] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:28:13.811339 containerd[1911]: 2025-12-12 17:28:13.523 [INFO][4798] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Dec 12 17:28:13.811339 containerd[1911]: 2025-12-12 17:28:13.547 [INFO][4798] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" host="ip-172-31-17-228" Dec 12 17:28:13.811339 containerd[1911]: 2025-12-12 17:28:13.558 [INFO][4798] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-228" Dec 12 17:28:13.811339 containerd[1911]: 2025-12-12 17:28:13.568 [INFO][4798] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:13.811339 containerd[1911]: 2025-12-12 17:28:13.572 [INFO][4798] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:13.811339 containerd[1911]: 2025-12-12 17:28:13.577 [INFO][4798] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:13.813176 containerd[1911]: 2025-12-12 17:28:13.577 [INFO][4798] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" host="ip-172-31-17-228" Dec 12 17:28:13.813176 containerd[1911]: 2025-12-12 17:28:13.581 [INFO][4798] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1 Dec 12 17:28:13.813176 containerd[1911]: 2025-12-12 17:28:13.591 [INFO][4798] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" host="ip-172-31-17-228" Dec 12 17:28:13.813176 containerd[1911]: 2025-12-12 17:28:13.615 [INFO][4798] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.129/26] block=192.168.52.128/26 handle="k8s-pod-network.47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" host="ip-172-31-17-228" Dec 12 17:28:13.813176 containerd[1911]: 2025-12-12 17:28:13.615 [INFO][4798] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.129/26] handle="k8s-pod-network.47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" host="ip-172-31-17-228" Dec 12 17:28:13.813176 containerd[1911]: 2025-12-12 17:28:13.615 [INFO][4798] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:28:13.813176 containerd[1911]: 2025-12-12 17:28:13.615 [INFO][4798] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.129/26] IPv6=[] ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" HandleID="k8s-pod-network.47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Workload="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" Dec 12 17:28:13.813824 containerd[1911]: 2025-12-12 17:28:13.627 [INFO][4673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Namespace="calico-system" Pod="whisker-6b7d45d746-ssppz" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0", GenerateName:"whisker-6b7d45d746-", Namespace:"calico-system", SelfLink:"", UID:"c3f845e7-b0d1-412d-aad9-3771e0979bfc", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b7d45d746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"whisker-6b7d45d746-ssppz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8614af73825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:13.813824 containerd[1911]: 2025-12-12 17:28:13.628 [INFO][4673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.129/32] ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Namespace="calico-system" Pod="whisker-6b7d45d746-ssppz" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" Dec 12 17:28:13.814179 containerd[1911]: 2025-12-12 17:28:13.629 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8614af73825 ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Namespace="calico-system" Pod="whisker-6b7d45d746-ssppz" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" Dec 12 17:28:13.814179 containerd[1911]: 2025-12-12 17:28:13.704 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Namespace="calico-system" Pod="whisker-6b7d45d746-ssppz" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" Dec 12 17:28:13.814359 containerd[1911]: 2025-12-12 17:28:13.704 [INFO][4673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Namespace="calico-system" Pod="whisker-6b7d45d746-ssppz" 
WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0", GenerateName:"whisker-6b7d45d746-", Namespace:"calico-system", SelfLink:"", UID:"c3f845e7-b0d1-412d-aad9-3771e0979bfc", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b7d45d746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1", Pod:"whisker-6b7d45d746-ssppz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8614af73825", MAC:"de:c7:b1:df:5e:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:13.814533 containerd[1911]: 2025-12-12 17:28:13.804 [INFO][4673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" Namespace="calico-system" Pod="whisker-6b7d45d746-ssppz" WorkloadEndpoint="ip--172--31--17--228-k8s-whisker--6b7d45d746--ssppz-eth0" Dec 12 17:28:13.904000 audit: BPF prog-id=187 op=LOAD Dec 12 17:28:13.904000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd3f81eb8 a2=40 a3=ffffd3f81ee8 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.904000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.904000 audit: BPF prog-id=187 op=UNLOAD Dec 12 17:28:13.904000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd3f81ee8 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.904000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.931000 audit: BPF prog-id=188 op=LOAD Dec 12 17:28:13.931000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd3f81ec8 a2=94 a3=4 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.931000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.931000 audit: BPF prog-id=188 op=UNLOAD Dec 12 17:28:13.931000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 
ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.931000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.931000 audit: BPF prog-id=189 op=LOAD Dec 12 17:28:13.931000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd3f81d08 a2=94 a3=5 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.931000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.933000 audit: BPF prog-id=189 op=UNLOAD Dec 12 17:28:13.933000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.933000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.933000 audit: BPF prog-id=190 op=LOAD Dec 12 17:28:13.933000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd3f81f38 a2=94 a3=6 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.933000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.933000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:28:13.933000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.933000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.933000 audit: BPF prog-id=191 op=LOAD Dec 12 17:28:13.933000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd3f81708 a2=94 a3=83 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.933000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.944000 audit: BPF prog-id=192 op=LOAD Dec 12 17:28:13.944000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd3f814c8 a2=94 a3=2 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.944000 audit: BPF prog-id=192 op=UNLOAD Dec 12 17:28:13.944000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 12 17:28:13.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.945000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:28:13.945000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=4fd8620 a3=4fcbb00 items=0 ppid=4696 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:28:13.955072 containerd[1911]: time="2025-12-12T17:28:13.954887794Z" level=info msg="connecting to shim 47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1" address="unix:///run/containerd/s/6f73168d8aa17423dfae54fffc3e27982a533ceb7f04b9fd5708485f510aab1f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:13.987000 audit: BPF prog-id=193 op=LOAD Dec 12 17:28:13.987000 audit[4871]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe8af6f78 a2=98 a3=ffffe8af6f68 items=0 ppid=4696 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.987000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:28:13.987000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:28:13.987000 audit[4871]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe8af6f48 a3=0 items=0 ppid=4696 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.987000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:28:13.987000 audit: BPF prog-id=194 op=LOAD Dec 12 17:28:13.987000 audit[4871]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe8af6e28 a2=74 a3=95 items=0 ppid=4696 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.987000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:28:13.987000 audit: BPF prog-id=194 op=UNLOAD Dec 12 17:28:13.987000 audit[4871]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.987000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:28:13.987000 audit: BPF prog-id=195 op=LOAD Dec 12 17:28:13.987000 audit[4871]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe8af6e58 a2=40 a3=ffffe8af6e88 items=0 ppid=4696 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.987000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:28:13.987000 audit: BPF prog-id=195 op=UNLOAD Dec 12 17:28:13.987000 audit[4871]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe8af6e88 items=0 ppid=4696 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:13.987000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:28:14.043698 systemd[1]: Started cri-containerd-47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1.scope - libcontainer container 47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1. 
Dec 12 17:28:14.085000 audit: BPF prog-id=196 op=LOAD Dec 12 17:28:14.086000 audit: BPF prog-id=197 op=LOAD Dec 12 17:28:14.086000 audit[4870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4858 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663266666237643964393564653534383061626531326564333363 Dec 12 17:28:14.086000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:28:14.086000 audit[4870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4858 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663266666237643964393564653534383061626531326564333363 Dec 12 17:28:14.087000 audit: BPF prog-id=198 op=LOAD Dec 12 17:28:14.087000 audit[4870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4858 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663266666237643964393564653534383061626531326564333363 Dec 12 17:28:14.088000 audit: BPF prog-id=199 op=LOAD Dec 12 17:28:14.088000 audit[4870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4858 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663266666237643964393564653534383061626531326564333363 Dec 12 17:28:14.088000 audit: BPF prog-id=199 op=UNLOAD Dec 12 17:28:14.088000 audit[4870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4858 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663266666237643964393564653534383061626531326564333363 Dec 12 17:28:14.089000 audit: BPF prog-id=198 op=UNLOAD Dec 12 17:28:14.089000 audit[4870]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4858 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663266666237643964393564653534383061626531326564333363 Dec 12 17:28:14.089000 audit: BPF prog-id=200 op=LOAD Dec 12 17:28:14.089000 audit[4870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4858 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663266666237643964393564653534383061626531326564333363 Dec 12 17:28:14.174734 containerd[1911]: time="2025-12-12T17:28:14.174522523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b7d45d746-ssppz,Uid:c3f845e7-b0d1-412d-aad9-3771e0979bfc,Namespace:calico-system,Attempt:0,} returns sandbox id \"47f2ffb7d9d95de5480abe12ed33cb72ae49f54a338452aeec7381b23ead35e1\"" Dec 12 17:28:14.181608 containerd[1911]: time="2025-12-12T17:28:14.181312759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:28:14.210774 systemd-networkd[1628]: vxlan.calico: Link UP Dec 12 17:28:14.210791 systemd-networkd[1628]: vxlan.calico: Gained carrier Dec 12 17:28:14.260000 audit: BPF prog-id=201 op=LOAD Dec 12 17:28:14.260000 audit[4921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe2cc1748 a2=98 a3=ffffe2cc1738 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.260000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.260000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:28:14.260000 audit[4921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe2cc1718 a3=0 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.260000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.260000 audit: BPF prog-id=202 op=LOAD Dec 12 17:28:14.260000 audit[4921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe2cc1428 a2=74 a3=95 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.260000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.260000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:28:14.260000 audit[4921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.260000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.260000 audit: BPF prog-id=203 op=LOAD Dec 12 17:28:14.260000 audit[4921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe2cc1488 a2=94 a3=2 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.260000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.260000 audit: BPF prog-id=203 op=UNLOAD Dec 12 17:28:14.260000 audit[4921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.260000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.259550 (udev-worker)[4616]: Network interface NamePolicy= disabled on kernel command line. 
Dec 12 17:28:14.260000 audit: BPF prog-id=204 op=LOAD Dec 12 17:28:14.260000 audit[4921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2cc1308 a2=40 a3=ffffe2cc1338 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.260000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.264000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:28:14.264000 audit[4921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffe2cc1338 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.264000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.264000 audit: BPF prog-id=205 op=LOAD Dec 12 17:28:14.264000 audit[4921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2cc1458 a2=94 a3=b7 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.264000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.264000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:28:14.264000 audit[4921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.264000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.267000 audit: BPF prog-id=206 op=LOAD Dec 12 17:28:14.267000 audit[4921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2cc0b08 a2=94 a3=2 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.268000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:28:14.268000 audit[4921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.268000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.268000 audit: BPF prog-id=207 op=LOAD Dec 12 17:28:14.268000 audit[4921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2cc0c98 a2=94 a3=30 items=0 ppid=4696 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.268000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:28:14.281000 audit: BPF prog-id=208 op=LOAD Dec 12 17:28:14.281000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffedccd848 a2=98 a3=ffffedccd838 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.281000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:14.281000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:28:14.281000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffedccd818 a3=0 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.281000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:14.281000 audit: BPF prog-id=209 op=LOAD Dec 12 17:28:14.281000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffedccd4d8 a2=74 a3=95 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.281000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:14.281000 audit: BPF prog-id=209 op=UNLOAD Dec 12 17:28:14.281000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.281000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:14.282000 audit: BPF prog-id=210 op=LOAD Dec 12 17:28:14.282000 audit[4925]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=4 a0=5 a1=ffffedccd538 a2=94 a3=2 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.282000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:14.282000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:28:14.282000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:14.282000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:14.323285 containerd[1911]: time="2025-12-12T17:28:14.323216948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5446659b-7cz6s,Uid:464adca8-6b09-4273-9180-6050c84a6f28,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:28:14.481511 containerd[1911]: time="2025-12-12T17:28:14.481342473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:14.486981 containerd[1911]: time="2025-12-12T17:28:14.486864189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:28:14.487909 containerd[1911]: time="2025-12-12T17:28:14.487022613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:14.487986 kubelet[3528]: E1212 17:28:14.487251 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:14.487986 kubelet[3528]: E1212 17:28:14.487323 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:14.492164 kubelet[3528]: E1212 17:28:14.492017 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9f0be7f5aecb4214a695a0df6daf94fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8zbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7d45d746-ssppz_calico-system(c3f845e7-b0d1-412d-aad9-3771e0979bfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:14.495528 containerd[1911]: time="2025-12-12T17:28:14.495468465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:28:14.761169 containerd[1911]: time="2025-12-12T17:28:14.759949810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:14.764290 containerd[1911]: time="2025-12-12T17:28:14.764022958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:28:14.766070 containerd[1911]: time="2025-12-12T17:28:14.765830794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:14.766972 kubelet[3528]: E1212 17:28:14.766635 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:14.767320 kubelet[3528]: E1212 17:28:14.766981 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:14.767954 kubelet[3528]: E1212 17:28:14.767602 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7d45d746-ssppz_calico-system(c3f845e7-b0d1-412d-aad9-3771e0979bfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:14.769062 kubelet[3528]: E1212 17:28:14.768988 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc" Dec 12 17:28:14.808950 systemd-networkd[1628]: cali8e50f429e53: Link UP Dec 12 17:28:14.812344 systemd-networkd[1628]: cali8e50f429e53: Gained carrier Dec 12 17:28:14.857688 containerd[1911]: 2025-12-12 17:28:14.497 [INFO][4929] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0 calico-apiserver-7b5446659b- calico-apiserver 464adca8-6b09-4273-9180-6050c84a6f28 881 0 2025-12-12 17:27:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b5446659b 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-228 calico-apiserver-7b5446659b-7cz6s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e50f429e53 [] [] }} ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-7cz6s" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-" Dec 12 17:28:14.857688 containerd[1911]: 2025-12-12 17:28:14.497 [INFO][4929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-7cz6s" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" Dec 12 17:28:14.857688 containerd[1911]: 2025-12-12 17:28:14.646 [INFO][4941] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" HandleID="k8s-pod-network.f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Workload="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" Dec 12 17:28:14.858402 containerd[1911]: 2025-12-12 17:28:14.647 [INFO][4941] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" HandleID="k8s-pod-network.f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Workload="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002753e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-228", "pod":"calico-apiserver-7b5446659b-7cz6s", "timestamp":"2025-12-12 17:28:14.646072018 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:14.858402 containerd[1911]: 2025-12-12 17:28:14.647 [INFO][4941] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:14.858402 containerd[1911]: 2025-12-12 17:28:14.647 [INFO][4941] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:28:14.858402 containerd[1911]: 2025-12-12 17:28:14.648 [INFO][4941] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Dec 12 17:28:14.858402 containerd[1911]: 2025-12-12 17:28:14.670 [INFO][4941] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" host="ip-172-31-17-228" Dec 12 17:28:14.858402 containerd[1911]: 2025-12-12 17:28:14.692 [INFO][4941] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-228" Dec 12 17:28:14.858402 containerd[1911]: 2025-12-12 17:28:14.714 [INFO][4941] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:14.858402 containerd[1911]: 2025-12-12 17:28:14.720 [INFO][4941] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:14.858402 containerd[1911]: 2025-12-12 17:28:14.725 [INFO][4941] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:14.862533 containerd[1911]: 2025-12-12 17:28:14.726 [INFO][4941] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" host="ip-172-31-17-228" Dec 12 17:28:14.862533 containerd[1911]: 2025-12-12 17:28:14.733 [INFO][4941] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383 Dec 12 17:28:14.862533 containerd[1911]: 2025-12-12 17:28:14.753 [INFO][4941] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" host="ip-172-31-17-228" Dec 12 17:28:14.862533 containerd[1911]: 2025-12-12 17:28:14.777 [INFO][4941] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.130/26] block=192.168.52.128/26 handle="k8s-pod-network.f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" host="ip-172-31-17-228" Dec 12 17:28:14.862533 containerd[1911]: 2025-12-12 17:28:14.778 [INFO][4941] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.130/26] handle="k8s-pod-network.f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" host="ip-172-31-17-228" Dec 12 17:28:14.862533 containerd[1911]: 2025-12-12 17:28:14.778 [INFO][4941] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
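The IPAM trace above is the normal Calico assignment path: the plugin takes the host-wide IPAM lock, confirms this node's affinity for the 192.168.52.128/26 block, writes the block to claim 192.168.52.130, and releases the lock. As a sketch of how that claim could be cross-checked against the datastore, assuming calicoctl is installed and pointed at the same cluster:

    calicoctl ipam show --ip=192.168.52.130
    calicoctl ipam show --show-blocks

The first command should report which handle owns the address (the k8s-pod-network.<container-id> handle seen above); the second lists the per-node block affinities.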
Dec 12 17:28:14.862533 containerd[1911]: 2025-12-12 17:28:14.779 [INFO][4941] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.130/26] IPv6=[] ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" HandleID="k8s-pod-network.f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Workload="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" Dec 12 17:28:14.864413 containerd[1911]: 2025-12-12 17:28:14.788 [INFO][4929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-7cz6s" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0", GenerateName:"calico-apiserver-7b5446659b-", Namespace:"calico-apiserver", SelfLink:"", UID:"464adca8-6b09-4273-9180-6050c84a6f28", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5446659b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"calico-apiserver-7b5446659b-7cz6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e50f429e53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:14.864568 containerd[1911]: 2025-12-12 17:28:14.788 [INFO][4929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.130/32] ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-7cz6s" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" Dec 12 17:28:14.864568 containerd[1911]: 2025-12-12 17:28:14.789 [INFO][4929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e50f429e53 ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-7cz6s" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" Dec 12 17:28:14.864568 containerd[1911]: 2025-12-12 17:28:14.813 [INFO][4929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-7cz6s" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" Dec 12 17:28:14.864813 containerd[1911]: 2025-12-12 17:28:14.816 [INFO][4929] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-7cz6s" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0", GenerateName:"calico-apiserver-7b5446659b-", Namespace:"calico-apiserver", SelfLink:"", UID:"464adca8-6b09-4273-9180-6050c84a6f28", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5446659b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383", Pod:"calico-apiserver-7b5446659b-7cz6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e50f429e53", MAC:"3a:02:db:cb:b1:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:14.864939 containerd[1911]: 2025-12-12 17:28:14.849 [INFO][4929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-7cz6s" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--7cz6s-eth0" Dec 12 17:28:14.954364 containerd[1911]: time="2025-12-12T17:28:14.954272207Z" level=info msg="connecting to shim f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383" address="unix:///run/containerd/s/2ed817e4ea7d77e23e1fd6d526b7c08a31867b6a96ceb8700a8dea5e52ecba80" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:15.023000 audit: BPF prog-id=211 op=LOAD Dec 12 17:28:15.023000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffedccd4f8 a2=40 a3=ffffedccd528 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.023000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.024000 audit: BPF prog-id=211 op=UNLOAD Dec 12 17:28:15.024000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffedccd528 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.024000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.031483 systemd[1]: Started cri-containerd-f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383.scope - libcontainer container f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383. Dec 12 17:28:15.066000 audit: BPF prog-id=212 op=LOAD Dec 12 17:28:15.066000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffedccd508 a2=94 a3=4 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.066000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.066000 audit: BPF prog-id=212 op=UNLOAD Dec 12 17:28:15.066000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.066000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.067000 audit: BPF prog-id=213 op=LOAD Dec 12 17:28:15.067000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffedccd348 a2=94 a3=5 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.067000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.067000 audit: BPF prog-id=213 op=UNLOAD Dec 12 17:28:15.067000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.067000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.067000 audit: BPF prog-id=214 op=LOAD Dec 12 17:28:15.067000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffedccd578 a2=94 a3=6 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.067000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 
12 17:28:15.067000 audit: BPF prog-id=214 op=UNLOAD Dec 12 17:28:15.067000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.067000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.068000 audit: BPF prog-id=215 op=LOAD Dec 12 17:28:15.068000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffedcccd48 a2=94 a3=83 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.069000 audit: BPF prog-id=216 op=LOAD Dec 12 17:28:15.069000 audit[4925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffedcccb08 a2=94 a3=2 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.069000 audit: BPF prog-id=216 op=UNLOAD Dec 12 17:28:15.069000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.070000 audit: BPF prog-id=215 op=UNLOAD Dec 12 17:28:15.070000 audit[4925]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3f7d1620 a3=3f7c4b00 items=0 ppid=4696 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:28:15.086000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:28:15.086000 audit[4696]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000dce340 a2=0 a3=0 items=0 ppid=4684 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.086000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 
12 17:28:15.117000 audit: BPF prog-id=217 op=LOAD Dec 12 17:28:15.118000 audit: BPF prog-id=218 op=LOAD Dec 12 17:28:15.118000 audit[4976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613835616336393131636562343039386662383765633737353862 Dec 12 17:28:15.119000 audit: BPF prog-id=218 op=UNLOAD Dec 12 17:28:15.119000 audit[4976]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613835616336393131636562343039386662383765633737353862 Dec 12 17:28:15.120000 audit: BPF prog-id=219 op=LOAD Dec 12 17:28:15.120000 audit[4976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613835616336393131636562343039386662383765633737353862 Dec 12 17:28:15.121000 audit: BPF prog-id=220 op=LOAD Dec 12 17:28:15.121000 audit[4976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613835616336393131636562343039386662383765633737353862 Dec 12 17:28:15.121000 audit: BPF prog-id=220 op=UNLOAD Dec 12 17:28:15.121000 audit[4976]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613835616336393131636562343039386662383765633737353862 Dec 12 17:28:15.121000 audit: BPF prog-id=219 op=UNLOAD Dec 12 17:28:15.121000 audit[4976]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613835616336393131636562343039386662383765633737353862 Dec 12 17:28:15.121000 audit: BPF prog-id=221 op=LOAD Dec 12 17:28:15.121000 audit[4976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613835616336393131636562343039386662383765633737353862 Dec 12 17:28:15.213845 containerd[1911]: time="2025-12-12T17:28:15.213782601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5446659b-7cz6s,Uid:464adca8-6b09-4273-9180-6050c84a6f28,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f9a85ac6911ceb4098fb87ec7758b28a675dfb6d9afe0da56ec61b6453f65383\"" Dec 12 17:28:15.229868 containerd[1911]: time="2025-12-12T17:28:15.229799901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:15.269000 audit[5023]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:15.269000 audit[5023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd2ce9d30 a2=0 a3=ffff85724fa8 items=0 ppid=4696 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.269000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:15.277000 audit[5025]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=5025 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:15.277000 audit[5025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe1e3fdd0 a2=0 a3=ffff89655fa8 items=0 ppid=4696 pid=5025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.277000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:15.292000 audit[5024]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=5024 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:15.292000 audit[5024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffea404fb0 a2=0 a3=ffff804a1fa8 items=0 ppid=4696 pid=5024 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.292000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:15.296000 audit[5026]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=5026 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:15.296000 audit[5026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffee7031b0 a2=0 a3=ffffb8ebdfa8 items=0 ppid=4696 pid=5026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.296000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:15.371000 audit[5037]: NETFILTER_CFG table=filter:125 family=2 entries=50 op=nft_register_chain pid=5037 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:15.373599 kernel: kauditd_printk_skb: 247 callbacks suppressed Dec 12 17:28:15.373771 kernel: audit: type=1325 audit(1765560495.371:678): table=filter:125 family=2 entries=50 op=nft_register_chain pid=5037 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:15.371000 audit[5037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=fffffa14bdf0 a2=0 a3=ffffa41a0fa8 items=0 ppid=4696 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.384209 kernel: audit: type=1300 audit(1765560495.371:678): arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=fffffa14bdf0 a2=0 a3=ffffa41a0fa8 items=0 ppid=4696 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.371000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:15.388265 kernel: audit: type=1327 audit(1765560495.371:678): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:15.419903 systemd-networkd[1628]: cali8614af73825: Gained IPv6LL Dec 12 17:28:15.520019 containerd[1911]: time="2025-12-12T17:28:15.519940954Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:15.522319 containerd[1911]: time="2025-12-12T17:28:15.522248326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:15.522445 containerd[1911]: time="2025-12-12T17:28:15.522389014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 
17:28:15.522882 kubelet[3528]: E1212 17:28:15.522811 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:15.523430 kubelet[3528]: E1212 17:28:15.522884 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:15.523430 kubelet[3528]: E1212 17:28:15.523095 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc92w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b5446659b-7cz6s_calico-apiserver(464adca8-6b09-4273-9180-6050c84a6f28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:15.525168 kubelet[3528]: E1212 17:28:15.525031 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:28:15.720889 kubelet[3528]: E1212 17:28:15.720709 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:28:15.726735 kubelet[3528]: E1212 17:28:15.724848 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc" Dec 12 17:28:15.931942 systemd-networkd[1628]: vxlan.calico: Gained IPv6LL Dec 12 17:28:16.187947 systemd-networkd[1628]: cali8e50f429e53: Gained IPv6LL Dec 12 17:28:16.245000 audit[5040]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:16.245000 audit[5040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffebd8ed90 a2=0 a3=1 items=0 ppid=3675 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:16.257215 kernel: audit: type=1325 audit(1765560496.245:679): table=filter:126 family=2 entries=20 op=nft_register_rule pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:16.257331 kernel: audit: type=1300 audit(1765560496.245:679): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffebd8ed90 a2=0 a3=1 items=0 ppid=3675 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:16.257386 kernel: audit: type=1327 audit(1765560496.245:679): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:16.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:16.262000 audit[5040]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:16.262000 audit[5040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 
a0=3 a1=ffffebd8ed90 a2=0 a3=1 items=0 ppid=3675 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:16.272686 kernel: audit: type=1325 audit(1765560496.262:680): table=nat:127 family=2 entries=14 op=nft_register_rule pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:16.272823 kernel: audit: type=1300 audit(1765560496.262:680): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffebd8ed90 a2=0 a3=1 items=0 ppid=3675 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:16.272872 kernel: audit: type=1327 audit(1765560496.262:680): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:16.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:16.291000 audit[5042]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:16.291000 audit[5042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffec690b50 a2=0 a3=1 items=0 ppid=3675 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:16.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:16.296789 kernel: audit: type=1325 audit(1765560496.291:681): table=filter:128 family=2 entries=20 op=nft_register_rule pid=5042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:16.297000 audit[5042]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:16.297000 audit[5042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffec690b50 a2=0 a3=1 items=0 ppid=3675 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:16.297000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:16.323845 containerd[1911]: time="2025-12-12T17:28:16.323762206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5phxm,Uid:5ad89cf6-178c-4c89-9906-56d3d4e0dba0,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:16.325535 containerd[1911]: time="2025-12-12T17:28:16.325137382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fxswz,Uid:4993f81d-df62-4e56-b3d3-f820e3c156d6,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:16.716029 systemd-networkd[1628]: cali0fb89285d33: Link UP Dec 12 17:28:16.720468 systemd-networkd[1628]: cali0fb89285d33: Gained carrier Dec 12 17:28:16.739271 kubelet[3528]: E1212 17:28:16.738915 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:28:16.794846 containerd[1911]: 2025-12-12 17:28:16.512 [INFO][5045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0 goldmane-666569f655- calico-system 4993f81d-df62-4e56-b3d3-f820e3c156d6 880 0 2025-12-12 17:27:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-228 goldmane-666569f655-fxswz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0fb89285d33 [] [] }} ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Namespace="calico-system" Pod="goldmane-666569f655-fxswz" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-" Dec 12 17:28:16.794846 containerd[1911]: 2025-12-12 17:28:16.515 [INFO][5045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Namespace="calico-system" Pod="goldmane-666569f655-fxswz" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" Dec 12 17:28:16.794846 containerd[1911]: 2025-12-12 17:28:16.609 [INFO][5066] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" HandleID="k8s-pod-network.42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Workload="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" Dec 12 17:28:16.795179 containerd[1911]: 2025-12-12 17:28:16.610 [INFO][5066] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" HandleID="k8s-pod-network.42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Workload="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b050), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"goldmane-666569f655-fxswz", "timestamp":"2025-12-12 17:28:16.609540863 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:16.795179 containerd[1911]: 2025-12-12 17:28:16.610 [INFO][5066] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:16.795179 containerd[1911]: 2025-12-12 17:28:16.610 [INFO][5066] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
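The audit SYSCALL records in this window carry their command lines as hex-encoded, NUL-separated argv in the PROCTITLE field. Decoded, the bpftool records earlier in this window correspond to

    bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A

apparently invoked by the calico-node process (ppid=4696), and the iptables-nft-re records to

    iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000

Any proctitle field can be decoded the same way, assuming xxd is available on the host:

    echo "$PROCTITLE_HEX" | xxd -r -p | tr '\0' ' '; echo

where PROCTITLE_HEX is the hex string copied from the proctitle= field.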
Dec 12 17:28:16.795179 containerd[1911]: 2025-12-12 17:28:16.610 [INFO][5066] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Dec 12 17:28:16.795179 containerd[1911]: 2025-12-12 17:28:16.632 [INFO][5066] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" host="ip-172-31-17-228" Dec 12 17:28:16.795179 containerd[1911]: 2025-12-12 17:28:16.642 [INFO][5066] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-228" Dec 12 17:28:16.795179 containerd[1911]: 2025-12-12 17:28:16.652 [INFO][5066] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:16.795179 containerd[1911]: 2025-12-12 17:28:16.657 [INFO][5066] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:16.795179 containerd[1911]: 2025-12-12 17:28:16.663 [INFO][5066] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:16.796931 containerd[1911]: 2025-12-12 17:28:16.663 [INFO][5066] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" host="ip-172-31-17-228" Dec 12 17:28:16.796931 containerd[1911]: 2025-12-12 17:28:16.668 [INFO][5066] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0 Dec 12 17:28:16.796931 containerd[1911]: 2025-12-12 17:28:16.678 [INFO][5066] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" host="ip-172-31-17-228" Dec 12 17:28:16.796931 containerd[1911]: 2025-12-12 17:28:16.696 [INFO][5066] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.131/26] block=192.168.52.128/26 handle="k8s-pod-network.42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" host="ip-172-31-17-228" Dec 12 17:28:16.796931 containerd[1911]: 2025-12-12 17:28:16.697 [INFO][5066] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.131/26] handle="k8s-pod-network.42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" host="ip-172-31-17-228" Dec 12 17:28:16.796931 containerd[1911]: 2025-12-12 17:28:16.697 [INFO][5066] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
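Both sandboxes so far have drawn from the same node-affine block: 192.168.52.130 for the apiserver pod and now 192.168.52.131 for goldmane. The 192.168.52.128/26 block spans 192.168.52.128 through 192.168.52.191, i.e. 64 addresses reserved to this node. As an illustrative check with the Python standard library:

    python3 -c 'import ipaddress; b = ipaddress.ip_network("192.168.52.128/26"); print(b.num_addresses, b[0], b[-1])'

which prints 64 192.168.52.128 192.168.52.191.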
Dec 12 17:28:16.796931 containerd[1911]: 2025-12-12 17:28:16.697 [INFO][5066] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.131/26] IPv6=[] ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" HandleID="k8s-pod-network.42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Workload="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" Dec 12 17:28:16.798907 containerd[1911]: 2025-12-12 17:28:16.704 [INFO][5045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Namespace="calico-system" Pod="goldmane-666569f655-fxswz" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4993f81d-df62-4e56-b3d3-f820e3c156d6", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"goldmane-666569f655-fxswz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0fb89285d33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:16.798907 containerd[1911]: 2025-12-12 17:28:16.705 [INFO][5045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.131/32] ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Namespace="calico-system" Pod="goldmane-666569f655-fxswz" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" Dec 12 17:28:16.800573 containerd[1911]: 2025-12-12 17:28:16.705 [INFO][5045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fb89285d33 ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Namespace="calico-system" Pod="goldmane-666569f655-fxswz" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" Dec 12 17:28:16.800573 containerd[1911]: 2025-12-12 17:28:16.726 [INFO][5045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Namespace="calico-system" Pod="goldmane-666569f655-fxswz" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" Dec 12 17:28:16.801955 containerd[1911]: 2025-12-12 17:28:16.734 [INFO][5045] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Namespace="calico-system" Pod="goldmane-666569f655-fxswz" 
WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4993f81d-df62-4e56-b3d3-f820e3c156d6", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0", Pod:"goldmane-666569f655-fxswz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0fb89285d33", MAC:"a6:d5:d5:60:2b:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:16.802323 containerd[1911]: 2025-12-12 17:28:16.784 [INFO][5045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" Namespace="calico-system" Pod="goldmane-666569f655-fxswz" WorkloadEndpoint="ip--172--31--17--228-k8s-goldmane--666569f655--fxswz-eth0" Dec 12 17:28:16.874376 containerd[1911]: time="2025-12-12T17:28:16.874041961Z" level=info msg="connecting to shim 42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0" address="unix:///run/containerd/s/601bfac8f3c59e76a9292aca31ed9dcbc5a529b7b7623873e3dca7d6c50f79d4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:16.928508 systemd-networkd[1628]: cali58afbb977a8: Link UP Dec 12 17:28:16.935376 systemd-networkd[1628]: cali58afbb977a8: Gained carrier Dec 12 17:28:16.994376 containerd[1911]: 2025-12-12 17:28:16.506 [INFO][5043] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0 csi-node-driver- calico-system 5ad89cf6-178c-4c89-9906-56d3d4e0dba0 776 0 2025-12-12 17:27:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-228 csi-node-driver-5phxm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali58afbb977a8 [] [] }} ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Namespace="calico-system" Pod="csi-node-driver-5phxm" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--5phxm-" Dec 12 17:28:16.994376 containerd[1911]: 2025-12-12 17:28:16.507 [INFO][5043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Namespace="calico-system" Pod="csi-node-driver-5phxm" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" Dec 12 17:28:16.994376 containerd[1911]: 2025-12-12 17:28:16.621 [INFO][5068] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" HandleID="k8s-pod-network.375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Workload="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" Dec 12 17:28:16.994727 containerd[1911]: 2025-12-12 17:28:16.621 [INFO][5068] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" HandleID="k8s-pod-network.375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Workload="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d8e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"csi-node-driver-5phxm", "timestamp":"2025-12-12 17:28:16.6214962 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:16.994727 containerd[1911]: 2025-12-12 17:28:16.621 [INFO][5068] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:16.994727 containerd[1911]: 2025-12-12 17:28:16.697 [INFO][5068] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:28:16.994727 containerd[1911]: 2025-12-12 17:28:16.697 [INFO][5068] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Dec 12 17:28:16.994727 containerd[1911]: 2025-12-12 17:28:16.774 [INFO][5068] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" host="ip-172-31-17-228" Dec 12 17:28:16.994727 containerd[1911]: 2025-12-12 17:28:16.801 [INFO][5068] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-228" Dec 12 17:28:16.994727 containerd[1911]: 2025-12-12 17:28:16.820 [INFO][5068] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:16.994727 containerd[1911]: 2025-12-12 17:28:16.826 [INFO][5068] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:16.994727 containerd[1911]: 2025-12-12 17:28:16.843 [INFO][5068] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:16.995185 containerd[1911]: 2025-12-12 17:28:16.843 [INFO][5068] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" host="ip-172-31-17-228" Dec 12 17:28:16.995185 containerd[1911]: 2025-12-12 17:28:16.860 [INFO][5068] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922 Dec 12 17:28:16.995185 containerd[1911]: 2025-12-12 17:28:16.874 [INFO][5068] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" host="ip-172-31-17-228" Dec 12 17:28:16.995185 
containerd[1911]: 2025-12-12 17:28:16.899 [INFO][5068] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.132/26] block=192.168.52.128/26 handle="k8s-pod-network.375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" host="ip-172-31-17-228" Dec 12 17:28:16.995185 containerd[1911]: 2025-12-12 17:28:16.904 [INFO][5068] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.132/26] handle="k8s-pod-network.375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" host="ip-172-31-17-228" Dec 12 17:28:16.995185 containerd[1911]: 2025-12-12 17:28:16.904 [INFO][5068] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:28:16.995185 containerd[1911]: 2025-12-12 17:28:16.904 [INFO][5068] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.132/26] IPv6=[] ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" HandleID="k8s-pod-network.375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Workload="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" Dec 12 17:28:16.995492 containerd[1911]: 2025-12-12 17:28:16.916 [INFO][5043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Namespace="calico-system" Pod="csi-node-driver-5phxm" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5ad89cf6-178c-4c89-9906-56d3d4e0dba0", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"csi-node-driver-5phxm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58afbb977a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:16.995620 containerd[1911]: 2025-12-12 17:28:16.916 [INFO][5043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.132/32] ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Namespace="calico-system" Pod="csi-node-driver-5phxm" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" Dec 12 17:28:16.995620 containerd[1911]: 2025-12-12 17:28:16.916 [INFO][5043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58afbb977a8 ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Namespace="calico-system" Pod="csi-node-driver-5phxm" 
WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" Dec 12 17:28:16.995620 containerd[1911]: 2025-12-12 17:28:16.937 [INFO][5043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Namespace="calico-system" Pod="csi-node-driver-5phxm" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" Dec 12 17:28:16.995803 containerd[1911]: 2025-12-12 17:28:16.938 [INFO][5043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Namespace="calico-system" Pod="csi-node-driver-5phxm" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5ad89cf6-178c-4c89-9906-56d3d4e0dba0", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922", Pod:"csi-node-driver-5phxm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58afbb977a8", MAC:"6e:21:ec:d1:33:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:16.995932 containerd[1911]: 2025-12-12 17:28:16.978 [INFO][5043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" Namespace="calico-system" Pod="csi-node-driver-5phxm" WorkloadEndpoint="ip--172--31--17--228-k8s-csi--node--driver--5phxm-eth0" Dec 12 17:28:17.024635 systemd[1]: Started cri-containerd-42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0.scope - libcontainer container 42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0. 
Dec 12 17:28:17.038000 audit[5127]: NETFILTER_CFG table=filter:130 family=2 entries=54 op=nft_register_chain pid=5127 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:17.038000 audit[5127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29220 a0=3 a1=fffffa33f470 a2=0 a3=ffff99670fa8 items=0 ppid=4696 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.038000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:17.104020 containerd[1911]: time="2025-12-12T17:28:17.103932898Z" level=info msg="connecting to shim 375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922" address="unix:///run/containerd/s/a41710113b36804f7e7bc5f58dac581a02d8454e456a6cb04cbcd369e12b9581" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:17.163000 audit: BPF prog-id=222 op=LOAD Dec 12 17:28:17.166000 audit: BPF prog-id=223 op=LOAD Dec 12 17:28:17.166000 audit[5108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=5097 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432613232353733616565646432393636626366303535656261396439 Dec 12 17:28:17.168000 audit: BPF prog-id=223 op=UNLOAD Dec 12 17:28:17.168000 audit[5108]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5097 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432613232353733616565646432393636626366303535656261396439 Dec 12 17:28:17.168000 audit: BPF prog-id=224 op=LOAD Dec 12 17:28:17.168000 audit[5108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=5097 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432613232353733616565646432393636626366303535656261396439 Dec 12 17:28:17.169000 audit: BPF prog-id=225 op=LOAD Dec 12 17:28:17.169000 audit[5108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=5097 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.169000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432613232353733616565646432393636626366303535656261396439 Dec 12 17:28:17.169000 audit: BPF prog-id=225 op=UNLOAD Dec 12 17:28:17.169000 audit[5108]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5097 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432613232353733616565646432393636626366303535656261396439 Dec 12 17:28:17.169000 audit: BPF prog-id=224 op=UNLOAD Dec 12 17:28:17.169000 audit[5108]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5097 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432613232353733616565646432393636626366303535656261396439 Dec 12 17:28:17.170000 audit: BPF prog-id=226 op=LOAD Dec 12 17:28:17.170000 audit[5108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=5097 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432613232353733616565646432393636626366303535656261396439 Dec 12 17:28:17.201140 systemd[1]: Started cri-containerd-375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922.scope - libcontainer container 375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922. 
Dec 12 17:28:17.264000 audit: BPF prog-id=227 op=LOAD Dec 12 17:28:17.267000 audit: BPF prog-id=228 op=LOAD Dec 12 17:28:17.267000 audit[5156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5144 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356661646166333630613264313336356236343330363439336139 Dec 12 17:28:17.267000 audit: BPF prog-id=228 op=UNLOAD Dec 12 17:28:17.267000 audit[5156]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5144 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356661646166333630613264313336356236343330363439336139 Dec 12 17:28:17.268000 audit: BPF prog-id=229 op=LOAD Dec 12 17:28:17.268000 audit[5156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5144 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356661646166333630613264313336356236343330363439336139 Dec 12 17:28:17.268000 audit: BPF prog-id=230 op=LOAD Dec 12 17:28:17.268000 audit[5156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5144 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356661646166333630613264313336356236343330363439336139 Dec 12 17:28:17.268000 audit: BPF prog-id=230 op=UNLOAD Dec 12 17:28:17.268000 audit[5156]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5144 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356661646166333630613264313336356236343330363439336139 Dec 12 17:28:17.268000 audit: BPF prog-id=229 op=UNLOAD Dec 12 17:28:17.268000 audit[5156]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5144 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356661646166333630613264313336356236343330363439336139 Dec 12 17:28:17.269000 audit: BPF prog-id=231 op=LOAD Dec 12 17:28:17.269000 audit[5156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5144 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356661646166333630613264313336356236343330363439336139 Dec 12 17:28:17.262000 audit[5177]: NETFILTER_CFG table=filter:131 family=2 entries=40 op=nft_register_chain pid=5177 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:17.262000 audit[5177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20748 a0=3 a1=ffffec64ffd0 a2=0 a3=ffff91529fa8 items=0 ppid=4696 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.262000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:17.319125 containerd[1911]: time="2025-12-12T17:28:17.318921035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5phxm,Uid:5ad89cf6-178c-4c89-9906-56d3d4e0dba0,Namespace:calico-system,Attempt:0,} returns sandbox id \"375fadaf360a2d1365b64306493a9067eedb3b71c6f29f3dc2770555f3a65922\"" Dec 12 17:28:17.327246 containerd[1911]: time="2025-12-12T17:28:17.327063839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:17.348780 containerd[1911]: time="2025-12-12T17:28:17.348465575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6755b5785f-427k6,Uid:b11dd849-e38b-40e1-a2d3-0061a9f777d6,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:17.478784 containerd[1911]: time="2025-12-12T17:28:17.478643376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fxswz,Uid:4993f81d-df62-4e56-b3d3-f820e3c156d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"42a22573aeedd2966bcf055eba9d913215cdd277769591e6cb7f0da96ea2ece0\"" Dec 12 17:28:17.650895 systemd-networkd[1628]: cali6b52c6fba2b: Link UP Dec 12 17:28:17.654060 systemd-networkd[1628]: cali6b52c6fba2b: Gained carrier Dec 12 17:28:17.660157 containerd[1911]: time="2025-12-12T17:28:17.659591581Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:17.665633 containerd[1911]: time="2025-12-12T17:28:17.664743637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:17.666393 containerd[1911]: time="2025-12-12T17:28:17.665076133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:17.666499 kubelet[3528]: E1212 17:28:17.666380 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:17.666499 kubelet[3528]: E1212 17:28:17.666451 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:17.677383 kubelet[3528]: E1212 17:28:17.666845 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:17.677600 containerd[1911]: time="2025-12-12T17:28:17.668947093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:28:17.696530 containerd[1911]: 2025-12-12 17:28:17.518 [INFO][5185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0 calico-kube-controllers-6755b5785f- calico-system b11dd849-e38b-40e1-a2d3-0061a9f777d6 879 0 2025-12-12 17:27:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6755b5785f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-228 calico-kube-controllers-6755b5785f-427k6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6b52c6fba2b [] [] }} ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Namespace="calico-system" Pod="calico-kube-controllers-6755b5785f-427k6" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-" Dec 12 17:28:17.696530 containerd[1911]: 2025-12-12 17:28:17.519 [INFO][5185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Namespace="calico-system" Pod="calico-kube-controllers-6755b5785f-427k6" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" Dec 12 17:28:17.696530 containerd[1911]: 2025-12-12 17:28:17.570 [INFO][5204] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" HandleID="k8s-pod-network.569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" Dec 12 17:28:17.696923 containerd[1911]: 2025-12-12 17:28:17.570 [INFO][5204] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" HandleID="k8s-pod-network.569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1a30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-228", "pod":"calico-kube-controllers-6755b5785f-427k6", "timestamp":"2025-12-12 17:28:17.570714264 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:17.696923 containerd[1911]: 2025-12-12 17:28:17.571 [INFO][5204] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:17.696923 containerd[1911]: 2025-12-12 17:28:17.571 [INFO][5204] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:28:17.696923 containerd[1911]: 2025-12-12 17:28:17.571 [INFO][5204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Dec 12 17:28:17.696923 containerd[1911]: 2025-12-12 17:28:17.587 [INFO][5204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" host="ip-172-31-17-228" Dec 12 17:28:17.696923 containerd[1911]: 2025-12-12 17:28:17.596 [INFO][5204] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-228" Dec 12 17:28:17.696923 containerd[1911]: 2025-12-12 17:28:17.607 [INFO][5204] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:17.696923 containerd[1911]: 2025-12-12 17:28:17.610 [INFO][5204] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:17.696923 containerd[1911]: 2025-12-12 17:28:17.615 [INFO][5204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:17.698341 containerd[1911]: 2025-12-12 17:28:17.615 [INFO][5204] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" host="ip-172-31-17-228" Dec 12 17:28:17.698341 containerd[1911]: 2025-12-12 17:28:17.617 [INFO][5204] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d Dec 12 17:28:17.698341 containerd[1911]: 2025-12-12 17:28:17.625 [INFO][5204] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" host="ip-172-31-17-228" Dec 12 17:28:17.698341 containerd[1911]: 2025-12-12 17:28:17.640 [INFO][5204] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.133/26] block=192.168.52.128/26 handle="k8s-pod-network.569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" host="ip-172-31-17-228" Dec 12 17:28:17.698341 containerd[1911]: 2025-12-12 17:28:17.640 [INFO][5204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.133/26] handle="k8s-pod-network.569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" host="ip-172-31-17-228" Dec 12 17:28:17.698341 containerd[1911]: 2025-12-12 17:28:17.640 [INFO][5204] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:28:17.698341 containerd[1911]: 2025-12-12 17:28:17.640 [INFO][5204] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.133/26] IPv6=[] ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" HandleID="k8s-pod-network.569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Workload="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" Dec 12 17:28:17.698847 containerd[1911]: 2025-12-12 17:28:17.644 [INFO][5185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Namespace="calico-system" Pod="calico-kube-controllers-6755b5785f-427k6" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0", GenerateName:"calico-kube-controllers-6755b5785f-", Namespace:"calico-system", SelfLink:"", UID:"b11dd849-e38b-40e1-a2d3-0061a9f777d6", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6755b5785f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"calico-kube-controllers-6755b5785f-427k6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b52c6fba2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:17.699004 containerd[1911]: 2025-12-12 17:28:17.645 [INFO][5185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.133/32] ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Namespace="calico-system" Pod="calico-kube-controllers-6755b5785f-427k6" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" Dec 12 17:28:17.699004 containerd[1911]: 2025-12-12 17:28:17.645 [INFO][5185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b52c6fba2b ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Namespace="calico-system" Pod="calico-kube-controllers-6755b5785f-427k6" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" Dec 12 17:28:17.699004 containerd[1911]: 2025-12-12 17:28:17.654 [INFO][5185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Namespace="calico-system" Pod="calico-kube-controllers-6755b5785f-427k6" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" Dec 12 17:28:17.699251 
containerd[1911]: 2025-12-12 17:28:17.656 [INFO][5185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Namespace="calico-system" Pod="calico-kube-controllers-6755b5785f-427k6" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0", GenerateName:"calico-kube-controllers-6755b5785f-", Namespace:"calico-system", SelfLink:"", UID:"b11dd849-e38b-40e1-a2d3-0061a9f777d6", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6755b5785f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d", Pod:"calico-kube-controllers-6755b5785f-427k6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b52c6fba2b", MAC:"6e:47:37:e2:07:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:17.699397 containerd[1911]: 2025-12-12 17:28:17.687 [INFO][5185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" Namespace="calico-system" Pod="calico-kube-controllers-6755b5785f-427k6" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--kube--controllers--6755b5785f--427k6-eth0" Dec 12 17:28:17.737000 audit[5218]: NETFILTER_CFG table=filter:132 family=2 entries=44 op=nft_register_chain pid=5218 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:17.737000 audit[5218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21936 a0=3 a1=ffffe48a0110 a2=0 a3=ffffa30a4fa8 items=0 ppid=4696 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.737000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:17.777709 containerd[1911]: time="2025-12-12T17:28:17.777382957Z" level=info msg="connecting to shim 569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d" address="unix:///run/containerd/s/0907bf4823910a5139a8b16c72f786cac6d900daf336e6c23da1ad828ac72bbb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:17.829117 systemd[1]: Started 
cri-containerd-569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d.scope - libcontainer container 569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d. Dec 12 17:28:17.866000 audit: BPF prog-id=232 op=LOAD Dec 12 17:28:17.867000 audit: BPF prog-id=233 op=LOAD Dec 12 17:28:17.867000 audit[5240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5228 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396132373732663034613638353464326661666334623532316463 Dec 12 17:28:17.867000 audit: BPF prog-id=233 op=UNLOAD Dec 12 17:28:17.867000 audit[5240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5228 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396132373732663034613638353464326661666334623532316463 Dec 12 17:28:17.867000 audit: BPF prog-id=234 op=LOAD Dec 12 17:28:17.867000 audit[5240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5228 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396132373732663034613638353464326661666334623532316463 Dec 12 17:28:17.867000 audit: BPF prog-id=235 op=LOAD Dec 12 17:28:17.867000 audit[5240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5228 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396132373732663034613638353464326661666334623532316463 Dec 12 17:28:17.868000 audit: BPF prog-id=235 op=UNLOAD Dec 12 17:28:17.868000 audit[5240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5228 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.868000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396132373732663034613638353464326661666334623532316463 Dec 12 17:28:17.868000 audit: BPF prog-id=234 op=UNLOAD Dec 12 17:28:17.868000 audit[5240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5228 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396132373732663034613638353464326661666334623532316463 Dec 12 17:28:17.868000 audit: BPF prog-id=236 op=LOAD Dec 12 17:28:17.868000 audit[5240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5228 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396132373732663034613638353464326661666334623532316463 Dec 12 17:28:17.979064 containerd[1911]: time="2025-12-12T17:28:17.978440102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:17.981781 containerd[1911]: time="2025-12-12T17:28:17.981054134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:28:17.981781 containerd[1911]: time="2025-12-12T17:28:17.981222554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:17.982052 kubelet[3528]: E1212 17:28:17.981470 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:17.982052 kubelet[3528]: E1212 17:28:17.981535 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:17.985480 kubelet[3528]: E1212 17:28:17.982109 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6zf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fxswz_calico-system(4993f81d-df62-4e56-b3d3-f820e3c156d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:17.985480 kubelet[3528]: E1212 17:28:17.984176 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:28:17.986887 containerd[1911]: time="2025-12-12T17:28:17.985039766Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:17.993216 containerd[1911]: time="2025-12-12T17:28:17.993126074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6755b5785f-427k6,Uid:b11dd849-e38b-40e1-a2d3-0061a9f777d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"569a2772f04a6854d2fafc4b521dc824d4bce69dbbdd4eb43889ad34aeab725d\"" Dec 12 17:28:18.044145 systemd-networkd[1628]: cali0fb89285d33: Gained IPv6LL Dec 12 17:28:18.304917 containerd[1911]: time="2025-12-12T17:28:18.304699128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:18.307865 containerd[1911]: time="2025-12-12T17:28:18.307636488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:18.307865 containerd[1911]: time="2025-12-12T17:28:18.307801512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:18.308327 kubelet[3528]: E1212 17:28:18.308268 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:18.308438 kubelet[3528]: E1212 17:28:18.308344 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:18.309028 containerd[1911]: time="2025-12-12T17:28:18.308972748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:28:18.309157 kubelet[3528]: E1212 17:28:18.308916 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:18.310732 kubelet[3528]: E1212 17:28:18.310582 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:28:18.324231 containerd[1911]: time="2025-12-12T17:28:18.324157080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h6vwb,Uid:c263292c-2d61-41bc-b009-e2278ae54431,Namespace:kube-system,Attempt:0,}" Dec 12 17:28:18.324487 containerd[1911]: time="2025-12-12T17:28:18.324443256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8vwg,Uid:ee3e2608-b96a-4e33-97f4-50403b2c2ff6,Namespace:kube-system,Attempt:0,}" Dec 12 17:28:18.324928 containerd[1911]: time="2025-12-12T17:28:18.324877836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5446659b-48cdq,Uid:4f54519e-b15c-42cf-aa0f-f8649bda1c94,Namespace:calico-apiserver,Attempt:0,}" Dec 12 
17:28:18.615792 containerd[1911]: time="2025-12-12T17:28:18.615726925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:18.620795 containerd[1911]: time="2025-12-12T17:28:18.619889569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:28:18.621870 containerd[1911]: time="2025-12-12T17:28:18.621007957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:18.623339 kubelet[3528]: E1212 17:28:18.623016 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:18.624609 kubelet[3528]: E1212 17:28:18.623387 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:18.625762 kubelet[3528]: E1212 17:28:18.624144 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pq5rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6755b5785f-427k6_calico-system(b11dd849-e38b-40e1-a2d3-0061a9f777d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:18.627035 kubelet[3528]: E1212 17:28:18.626959 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:28:18.685228 systemd-networkd[1628]: cali6b52c6fba2b: Gained IPv6LL Dec 12 17:28:18.715159 systemd-networkd[1628]: calia05a57c583c: Link UP Dec 12 17:28:18.715558 systemd-networkd[1628]: calia05a57c583c: Gained carrier Dec 12 17:28:18.789749 kubelet[3528]: E1212 17:28:18.789385 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:28:18.790541 kubelet[3528]: E1212 17:28:18.790338 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:28:18.793787 containerd[1911]: 2025-12-12 17:28:18.471 [INFO][5274] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0 coredns-674b8bbfcf- kube-system c263292c-2d61-41bc-b009-e2278ae54431 876 0 2025-12-12 17:27:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-228 coredns-674b8bbfcf-h6vwb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia05a57c583c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Namespace="kube-system" Pod="coredns-674b8bbfcf-h6vwb" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-" Dec 12 17:28:18.793787 containerd[1911]: 2025-12-12 17:28:18.471 [INFO][5274] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Namespace="kube-system" Pod="coredns-674b8bbfcf-h6vwb" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" Dec 12 17:28:18.793787 containerd[1911]: 2025-12-12 17:28:18.598 [INFO][5305] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" HandleID="k8s-pod-network.cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Workload="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" Dec 12 17:28:18.795099 containerd[1911]: 2025-12-12 17:28:18.598 [INFO][5305] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" HandleID="k8s-pod-network.cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Workload="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d860), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-228", "pod":"coredns-674b8bbfcf-h6vwb", "timestamp":"2025-12-12 17:28:18.598281709 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:18.795099 containerd[1911]: 2025-12-12 17:28:18.598 [INFO][5305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:18.795099 containerd[1911]: 2025-12-12 17:28:18.599 [INFO][5305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:28:18.795099 containerd[1911]: 2025-12-12 17:28:18.599 [INFO][5305] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Dec 12 17:28:18.795099 containerd[1911]: 2025-12-12 17:28:18.627 [INFO][5305] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" host="ip-172-31-17-228" Dec 12 17:28:18.795099 containerd[1911]: 2025-12-12 17:28:18.640 [INFO][5305] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-228" Dec 12 17:28:18.795099 containerd[1911]: 2025-12-12 17:28:18.653 [INFO][5305] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:18.795099 containerd[1911]: 2025-12-12 17:28:18.658 [INFO][5305] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:18.795099 containerd[1911]: 2025-12-12 17:28:18.666 [INFO][5305] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:18.797801 containerd[1911]: 2025-12-12 17:28:18.666 [INFO][5305] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" host="ip-172-31-17-228" Dec 12 17:28:18.797801 containerd[1911]: 2025-12-12 17:28:18.668 [INFO][5305] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db Dec 12 17:28:18.797801 containerd[1911]: 2025-12-12 17:28:18.676 [INFO][5305] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" host="ip-172-31-17-228" Dec 12 17:28:18.797801 containerd[1911]: 2025-12-12 17:28:18.691 [INFO][5305] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.134/26] block=192.168.52.128/26 handle="k8s-pod-network.cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" host="ip-172-31-17-228" Dec 12 17:28:18.797801 containerd[1911]: 2025-12-12 17:28:18.691 [INFO][5305] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.134/26] handle="k8s-pod-network.cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" host="ip-172-31-17-228" Dec 12 17:28:18.797801 containerd[1911]: 2025-12-12 17:28:18.691 [INFO][5305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:28:18.797801 containerd[1911]: 2025-12-12 17:28:18.691 [INFO][5305] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.134/26] IPv6=[] ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" HandleID="k8s-pod-network.cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Workload="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" Dec 12 17:28:18.799308 containerd[1911]: 2025-12-12 17:28:18.703 [INFO][5274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Namespace="kube-system" Pod="coredns-674b8bbfcf-h6vwb" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c263292c-2d61-41bc-b009-e2278ae54431", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"coredns-674b8bbfcf-h6vwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia05a57c583c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:18.799308 containerd[1911]: 2025-12-12 17:28:18.703 [INFO][5274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.134/32] ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Namespace="kube-system" Pod="coredns-674b8bbfcf-h6vwb" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" Dec 12 17:28:18.799308 containerd[1911]: 2025-12-12 17:28:18.704 [INFO][5274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia05a57c583c ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Namespace="kube-system" Pod="coredns-674b8bbfcf-h6vwb" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" Dec 12 17:28:18.799308 containerd[1911]: 2025-12-12 17:28:18.714 [INFO][5274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Namespace="kube-system" Pod="coredns-674b8bbfcf-h6vwb" 
WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" Dec 12 17:28:18.799308 containerd[1911]: 2025-12-12 17:28:18.720 [INFO][5274] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Namespace="kube-system" Pod="coredns-674b8bbfcf-h6vwb" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c263292c-2d61-41bc-b009-e2278ae54431", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db", Pod:"coredns-674b8bbfcf-h6vwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia05a57c583c", MAC:"56:13:1c:82:db:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:18.799308 containerd[1911]: 2025-12-12 17:28:18.759 [INFO][5274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" Namespace="kube-system" Pod="coredns-674b8bbfcf-h6vwb" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--h6vwb-eth0" Dec 12 17:28:18.806865 kubelet[3528]: E1212 17:28:18.805890 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:28:18.912891 containerd[1911]: time="2025-12-12T17:28:18.911520795Z" level=info msg="connecting to shim cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db" address="unix:///run/containerd/s/99d7adb4abc9c9e626439cafde73e423b956376747542762bd6fa78485ba5733" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:18.929759 systemd-networkd[1628]: cali5f5b02f0c57: Link UP Dec 12 17:28:18.939841 systemd-networkd[1628]: cali5f5b02f0c57: Gained carrier Dec 12 17:28:18.941635 systemd-networkd[1628]: cali58afbb977a8: Gained IPv6LL Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.520 [INFO][5267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0 coredns-674b8bbfcf- kube-system ee3e2608-b96a-4e33-97f4-50403b2c2ff6 875 0 2025-12-12 17:27:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-228 coredns-674b8bbfcf-v8vwg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f5b02f0c57 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8vwg" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.520 [INFO][5267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8vwg" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.637 [INFO][5313] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" HandleID="k8s-pod-network.c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Workload="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.638 [INFO][5313] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" HandleID="k8s-pod-network.c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Workload="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000231d50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-228", "pod":"coredns-674b8bbfcf-v8vwg", "timestamp":"2025-12-12 17:28:18.637042634 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.638 [INFO][5313] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.691 [INFO][5313] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.691 [INFO][5313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.736 [INFO][5313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" host="ip-172-31-17-228" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.766 [INFO][5313] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-228" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.811 [INFO][5313] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.822 [INFO][5313] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.831 [INFO][5313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.831 [INFO][5313] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" host="ip-172-31-17-228" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.840 [INFO][5313] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367 Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.858 [INFO][5313] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" host="ip-172-31-17-228" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.898 [INFO][5313] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.135/26] block=192.168.52.128/26 handle="k8s-pod-network.c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" host="ip-172-31-17-228" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.898 [INFO][5313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.135/26] handle="k8s-pod-network.c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" host="ip-172-31-17-228" Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.898 [INFO][5313] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:28:19.015893 containerd[1911]: 2025-12-12 17:28:18.898 [INFO][5313] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.135/26] IPv6=[] ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" HandleID="k8s-pod-network.c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Workload="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" Dec 12 17:28:19.019809 containerd[1911]: 2025-12-12 17:28:18.909 [INFO][5267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8vwg" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee3e2608-b96a-4e33-97f4-50403b2c2ff6", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"coredns-674b8bbfcf-v8vwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f5b02f0c57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:19.019809 containerd[1911]: 2025-12-12 17:28:18.909 [INFO][5267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.135/32] ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8vwg" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" Dec 12 17:28:19.019809 containerd[1911]: 2025-12-12 17:28:18.909 [INFO][5267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f5b02f0c57 ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8vwg" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" Dec 12 17:28:19.019809 containerd[1911]: 2025-12-12 17:28:18.939 [INFO][5267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8vwg" 
WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" Dec 12 17:28:19.019809 containerd[1911]: 2025-12-12 17:28:18.945 [INFO][5267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8vwg" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee3e2608-b96a-4e33-97f4-50403b2c2ff6", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367", Pod:"coredns-674b8bbfcf-v8vwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f5b02f0c57", MAC:"46:68:a4:e2:b9:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:19.019809 containerd[1911]: 2025-12-12 17:28:18.995 [INFO][5267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8vwg" WorkloadEndpoint="ip--172--31--17--228-k8s-coredns--674b8bbfcf--v8vwg-eth0" Dec 12 17:28:19.130202 systemd[1]: Started cri-containerd-cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db.scope - libcontainer container cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db. 
Dec 12 17:28:19.172561 systemd-networkd[1628]: cali72cb29f0745: Link UP Dec 12 17:28:19.178507 systemd-networkd[1628]: cali72cb29f0745: Gained carrier Dec 12 17:28:19.182420 containerd[1911]: time="2025-12-12T17:28:19.182163612Z" level=info msg="connecting to shim c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367" address="unix:///run/containerd/s/a91122a2aae85fc6bac202abfdd8aa9c85ef1e244c2f3872eafc76718cbb84c8" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:19.180000 audit[5367]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:19.180000 audit[5367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffe91f170 a2=0 a3=1 items=0 ppid=3675 pid=5367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:19.194000 audit[5367]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:19.194000 audit[5367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffffe91f170 a2=0 a3=1 items=0 ppid=3675 pid=5367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.194000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:19.248000 audit: BPF prog-id=237 op=LOAD Dec 12 17:28:19.249000 audit: BPF prog-id=238 op=LOAD Dec 12 17:28:19.249000 audit[5357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364646466343036626533323565656536646130653866333031373232 Dec 12 17:28:19.251000 audit: BPF prog-id=238 op=UNLOAD Dec 12 17:28:19.251000 audit[5357]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364646466343036626533323565656536646130653866333031373232 Dec 12 17:28:19.251000 audit: BPF prog-id=239 op=LOAD Dec 12 17:28:19.251000 audit[5357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364646466343036626533323565656536646130653866333031373232 Dec 12 17:28:19.251000 audit: BPF prog-id=240 op=LOAD Dec 12 17:28:19.251000 audit[5357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364646466343036626533323565656536646130653866333031373232 Dec 12 17:28:19.251000 audit: BPF prog-id=240 op=UNLOAD Dec 12 17:28:19.251000 audit[5357]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364646466343036626533323565656536646130653866333031373232 Dec 12 17:28:19.251000 audit: BPF prog-id=239 op=UNLOAD Dec 12 17:28:19.251000 audit[5357]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364646466343036626533323565656536646130653866333031373232 Dec 12 17:28:19.252000 audit: BPF prog-id=241 op=LOAD Dec 12 17:28:19.252000 audit[5357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364646466343036626533323565656536646130653866333031373232 Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:18.555 [INFO][5280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0 calico-apiserver-7b5446659b- calico-apiserver 4f54519e-b15c-42cf-aa0f-f8649bda1c94 877 0 2025-12-12 17:27:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:7b5446659b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-228 calico-apiserver-7b5446659b-48cdq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72cb29f0745 [] [] }} ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-48cdq" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:18.556 [INFO][5280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-48cdq" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:18.664 [INFO][5319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" HandleID="k8s-pod-network.4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Workload="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:18.664 [INFO][5319] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" HandleID="k8s-pod-network.4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Workload="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000369990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-228", "pod":"calico-apiserver-7b5446659b-48cdq", "timestamp":"2025-12-12 17:28:18.664043582 +0000 UTC"}, Hostname:"ip-172-31-17-228", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:18.664 [INFO][5319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:18.899 [INFO][5319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:18.900 [INFO][5319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-228' Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:18.962 [INFO][5319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" host="ip-172-31-17-228" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.001 [INFO][5319] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-228" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.020 [INFO][5319] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.035 [INFO][5319] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.047 [INFO][5319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ip-172-31-17-228" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.047 [INFO][5319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" host="ip-172-31-17-228" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.069 [INFO][5319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.091 [INFO][5319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" host="ip-172-31-17-228" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.127 [INFO][5319] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.136/26] block=192.168.52.128/26 handle="k8s-pod-network.4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" host="ip-172-31-17-228" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.127 [INFO][5319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.136/26] handle="k8s-pod-network.4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" host="ip-172-31-17-228" Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.127 [INFO][5319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:28:19.257358 containerd[1911]: 2025-12-12 17:28:19.127 [INFO][5319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.136/26] IPv6=[] ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" HandleID="k8s-pod-network.4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Workload="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" Dec 12 17:28:19.259301 containerd[1911]: 2025-12-12 17:28:19.146 [INFO][5280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-48cdq" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0", GenerateName:"calico-apiserver-7b5446659b-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f54519e-b15c-42cf-aa0f-f8649bda1c94", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5446659b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"", Pod:"calico-apiserver-7b5446659b-48cdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72cb29f0745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:19.259301 containerd[1911]: 2025-12-12 17:28:19.146 [INFO][5280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.136/32] ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-48cdq" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" Dec 12 17:28:19.259301 containerd[1911]: 2025-12-12 17:28:19.146 [INFO][5280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72cb29f0745 ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-48cdq" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" Dec 12 17:28:19.259301 containerd[1911]: 2025-12-12 17:28:19.194 [INFO][5280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-48cdq" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" Dec 12 17:28:19.259301 containerd[1911]: 2025-12-12 17:28:19.198 [INFO][5280] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-48cdq" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0", GenerateName:"calico-apiserver-7b5446659b-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f54519e-b15c-42cf-aa0f-f8649bda1c94", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5446659b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-228", ContainerID:"4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de", Pod:"calico-apiserver-7b5446659b-48cdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72cb29f0745", MAC:"46:78:02:5a:b1:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:19.259301 containerd[1911]: 2025-12-12 17:28:19.243 [INFO][5280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" Namespace="calico-apiserver" Pod="calico-apiserver-7b5446659b-48cdq" WorkloadEndpoint="ip--172--31--17--228-k8s-calico--apiserver--7b5446659b--48cdq-eth0" Dec 12 17:28:19.308038 systemd[1]: Started cri-containerd-c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367.scope - libcontainer container c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367. 
Dec 12 17:28:19.403967 containerd[1911]: time="2025-12-12T17:28:19.403892293Z" level=info msg="connecting to shim 4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de" address="unix:///run/containerd/s/689226f73ccf92c11ca5c460029fd28b3163f2781acb0be7b0a3c95970eaf3d5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:19.457000 audit: BPF prog-id=242 op=LOAD Dec 12 17:28:19.465000 audit: BPF prog-id=243 op=LOAD Dec 12 17:28:19.465000 audit[5403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5385 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332396437646265303262663238663761353663326238666132633536 Dec 12 17:28:19.467000 audit: BPF prog-id=243 op=UNLOAD Dec 12 17:28:19.467000 audit[5403]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332396437646265303262663238663761353663326238666132633536 Dec 12 17:28:19.472000 audit: BPF prog-id=244 op=LOAD Dec 12 17:28:19.472000 audit[5403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5385 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332396437646265303262663238663761353663326238666132633536 Dec 12 17:28:19.478000 audit: BPF prog-id=245 op=LOAD Dec 12 17:28:19.478000 audit[5403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5385 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332396437646265303262663238663761353663326238666132633536 Dec 12 17:28:19.480000 audit: BPF prog-id=245 op=UNLOAD Dec 12 17:28:19.480000 audit[5403]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.480000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332396437646265303262663238663761353663326238666132633536 Dec 12 17:28:19.483000 audit: BPF prog-id=244 op=UNLOAD Dec 12 17:28:19.483000 audit[5403]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332396437646265303262663238663761353663326238666132633536 Dec 12 17:28:19.487000 audit: BPF prog-id=246 op=LOAD Dec 12 17:28:19.487000 audit[5403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5385 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332396437646265303262663238663761353663326238666132633536 Dec 12 17:28:19.560292 systemd[1]: Started cri-containerd-4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de.scope - libcontainer container 4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de. Dec 12 17:28:19.573778 containerd[1911]: time="2025-12-12T17:28:19.573696926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h6vwb,Uid:c263292c-2d61-41bc-b009-e2278ae54431,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db\"" Dec 12 17:28:19.604976 containerd[1911]: time="2025-12-12T17:28:19.604894574Z" level=info msg="CreateContainer within sandbox \"cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:28:19.647764 containerd[1911]: time="2025-12-12T17:28:19.644530371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8vwg,Uid:ee3e2608-b96a-4e33-97f4-50403b2c2ff6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367\"" Dec 12 17:28:19.674732 containerd[1911]: time="2025-12-12T17:28:19.671843607Z" level=info msg="CreateContainer within sandbox \"c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:28:19.680178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529385107.mount: Deactivated successfully. 
Dec 12 17:28:19.698782 containerd[1911]: time="2025-12-12T17:28:19.698070495Z" level=info msg="Container 56d8d7ebce013cd9ff2989c1452a23bd8135e79c56bc1977a8119d059b2de5d7: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:28:19.725051 containerd[1911]: time="2025-12-12T17:28:19.724887783Z" level=info msg="Container 2ac5246763ef9207588ee8b151d8961324c3ed5de6f9fbcb8093e39631612a17: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:28:19.740681 containerd[1911]: time="2025-12-12T17:28:19.740425059Z" level=info msg="CreateContainer within sandbox \"cdddf406be325eee6da0e8f3017222ad8e3d82a0da3018f1c3eba796028318db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56d8d7ebce013cd9ff2989c1452a23bd8135e79c56bc1977a8119d059b2de5d7\"" Dec 12 17:28:19.741000 audit: BPF prog-id=247 op=LOAD Dec 12 17:28:19.747000 audit: BPF prog-id=248 op=LOAD Dec 12 17:28:19.749453 containerd[1911]: time="2025-12-12T17:28:19.749327295Z" level=info msg="StartContainer for \"56d8d7ebce013cd9ff2989c1452a23bd8135e79c56bc1977a8119d059b2de5d7\"" Dec 12 17:28:19.747000 audit[5451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5437 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461326538313331643033306262643762376330336134356134373463 Dec 12 17:28:19.750000 audit: BPF prog-id=248 op=UNLOAD Dec 12 17:28:19.750000 audit[5451]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5437 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461326538313331643033306262643762376330336134356134373463 Dec 12 17:28:19.753000 audit: BPF prog-id=249 op=LOAD Dec 12 17:28:19.753000 audit[5451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5437 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461326538313331643033306262643762376330336134356134373463 Dec 12 17:28:19.754000 audit: BPF prog-id=250 op=LOAD Dec 12 17:28:19.754000 audit[5451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5437 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.758035 containerd[1911]: time="2025-12-12T17:28:19.756752847Z" level=info msg="connecting to shim 
56d8d7ebce013cd9ff2989c1452a23bd8135e79c56bc1977a8119d059b2de5d7" address="unix:///run/containerd/s/99d7adb4abc9c9e626439cafde73e423b956376747542762bd6fa78485ba5733" protocol=ttrpc version=3 Dec 12 17:28:19.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461326538313331643033306262643762376330336134356134373463 Dec 12 17:28:19.758000 audit: BPF prog-id=250 op=UNLOAD Dec 12 17:28:19.758000 audit[5451]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5437 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461326538313331643033306262643762376330336134356134373463 Dec 12 17:28:19.758000 audit: BPF prog-id=249 op=UNLOAD Dec 12 17:28:19.758000 audit[5451]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5437 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461326538313331643033306262643762376330336134356134373463 Dec 12 17:28:19.758000 audit: BPF prog-id=251 op=LOAD Dec 12 17:28:19.758000 audit[5451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5437 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461326538313331643033306262643762376330336134356134373463 Dec 12 17:28:19.758000 audit[5483]: NETFILTER_CFG table=filter:135 family=2 entries=54 op=nft_register_chain pid=5483 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:19.758000 audit[5483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26100 a0=3 a1=fffff9680800 a2=0 a3=ffff89b75fa8 items=0 ppid=4696 pid=5483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.758000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:19.775783 containerd[1911]: time="2025-12-12T17:28:19.774992835Z" level=info msg="CreateContainer within sandbox \"c29d7dbe02bf28f7a56c2b8fa2c56c776a6100bcc25df3832c70bca40eca6367\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"2ac5246763ef9207588ee8b151d8961324c3ed5de6f9fbcb8093e39631612a17\"" Dec 12 17:28:19.779031 containerd[1911]: time="2025-12-12T17:28:19.778952871Z" level=info msg="StartContainer for \"2ac5246763ef9207588ee8b151d8961324c3ed5de6f9fbcb8093e39631612a17\"" Dec 12 17:28:19.789467 containerd[1911]: time="2025-12-12T17:28:19.789147927Z" level=info msg="connecting to shim 2ac5246763ef9207588ee8b151d8961324c3ed5de6f9fbcb8093e39631612a17" address="unix:///run/containerd/s/a91122a2aae85fc6bac202abfdd8aa9c85ef1e244c2f3872eafc76718cbb84c8" protocol=ttrpc version=3 Dec 12 17:28:19.830962 kubelet[3528]: E1212 17:28:19.830578 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:28:19.882065 systemd[1]: Started cri-containerd-56d8d7ebce013cd9ff2989c1452a23bd8135e79c56bc1977a8119d059b2de5d7.scope - libcontainer container 56d8d7ebce013cd9ff2989c1452a23bd8135e79c56bc1977a8119d059b2de5d7. Dec 12 17:28:19.927057 systemd[1]: Started cri-containerd-2ac5246763ef9207588ee8b151d8961324c3ed5de6f9fbcb8093e39631612a17.scope - libcontainer container 2ac5246763ef9207588ee8b151d8961324c3ed5de6f9fbcb8093e39631612a17. Dec 12 17:28:19.952000 audit: BPF prog-id=252 op=LOAD Dec 12 17:28:19.954000 audit: BPF prog-id=253 op=LOAD Dec 12 17:28:19.954000 audit[5485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=5345 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643864376562636530313363643966663239383963313435326132 Dec 12 17:28:19.955000 audit: BPF prog-id=253 op=UNLOAD Dec 12 17:28:19.955000 audit[5485]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5345 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643864376562636530313363643966663239383963313435326132 Dec 12 17:28:19.955000 audit: BPF prog-id=254 op=LOAD Dec 12 17:28:19.955000 audit[5485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=5345 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.955000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643864376562636530313363643966663239383963313435326132 Dec 12 17:28:19.955000 audit: BPF prog-id=255 op=LOAD Dec 12 17:28:19.955000 audit[5485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=5345 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643864376562636530313363643966663239383963313435326132 Dec 12 17:28:19.956000 audit: BPF prog-id=255 op=UNLOAD Dec 12 17:28:19.956000 audit[5485]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5345 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643864376562636530313363643966663239383963313435326132 Dec 12 17:28:19.956000 audit: BPF prog-id=254 op=UNLOAD Dec 12 17:28:19.956000 audit[5485]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5345 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643864376562636530313363643966663239383963313435326132 Dec 12 17:28:19.956000 audit: BPF prog-id=256 op=LOAD Dec 12 17:28:19.956000 audit[5485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=5345 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536643864376562636530313363643966663239383963313435326132 Dec 12 17:28:19.989000 audit: BPF prog-id=257 op=LOAD Dec 12 17:28:19.997000 audit: BPF prog-id=258 op=LOAD Dec 12 17:28:19.997000 audit[5490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=5385 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.997000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633532343637363365663932303735383865653862313531643839 Dec 12 17:28:19.998000 audit: BPF prog-id=258 op=UNLOAD Dec 12 17:28:19.998000 audit[5490]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633532343637363365663932303735383865653862313531643839 Dec 12 17:28:19.999000 audit: BPF prog-id=259 op=LOAD Dec 12 17:28:19.999000 audit[5490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=5385 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:19.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633532343637363365663932303735383865653862313531643839 Dec 12 17:28:20.001000 audit: BPF prog-id=260 op=LOAD Dec 12 17:28:20.001000 audit[5490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=5385 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:20.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633532343637363365663932303735383865653862313531643839 Dec 12 17:28:20.002000 audit: BPF prog-id=260 op=UNLOAD Dec 12 17:28:20.002000 audit[5490]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:20.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633532343637363365663932303735383865653862313531643839 Dec 12 17:28:20.002000 audit: BPF prog-id=259 op=UNLOAD Dec 12 17:28:20.002000 audit[5490]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:20.002000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633532343637363365663932303735383865653862313531643839 Dec 12 17:28:20.003000 audit: BPF prog-id=261 op=LOAD Dec 12 17:28:20.003000 audit[5490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=5385 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:20.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633532343637363365663932303735383865653862313531643839 Dec 12 17:28:20.043278 containerd[1911]: time="2025-12-12T17:28:20.043200217Z" level=info msg="StartContainer for \"56d8d7ebce013cd9ff2989c1452a23bd8135e79c56bc1977a8119d059b2de5d7\" returns successfully" Dec 12 17:28:20.159541 containerd[1911]: time="2025-12-12T17:28:20.159472489Z" level=info msg="StartContainer for \"2ac5246763ef9207588ee8b151d8961324c3ed5de6f9fbcb8093e39631612a17\" returns successfully" Dec 12 17:28:20.146000 audit[5545]: NETFILTER_CFG table=filter:136 family=2 entries=83 op=nft_register_chain pid=5545 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:20.146000 audit[5545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=44544 a0=3 a1=ffffd7856500 a2=0 a3=ffffa262cfa8 items=0 ppid=4696 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:20.146000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:20.197679 containerd[1911]: time="2025-12-12T17:28:20.197472757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5446659b-48cdq,Uid:4f54519e-b15c-42cf-aa0f-f8649bda1c94,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4a2e8131d030bbd7b7c03a45a474cdeeb37e936c9c1b810276383338cc2c95de\"" Dec 12 17:28:20.211064 containerd[1911]: time="2025-12-12T17:28:20.210925945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:20.348054 systemd-networkd[1628]: cali5f5b02f0c57: Gained IPv6LL Dec 12 17:28:20.359393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701934920.mount: Deactivated successfully. 
Dec 12 17:28:20.516654 containerd[1911]: time="2025-12-12T17:28:20.516552279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:20.519875 containerd[1911]: time="2025-12-12T17:28:20.519770787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:20.519875 containerd[1911]: time="2025-12-12T17:28:20.519792687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:20.520633 kubelet[3528]: E1212 17:28:20.520346 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:20.520633 kubelet[3528]: E1212 17:28:20.520410 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:20.521589 kubelet[3528]: E1212 17:28:20.521493 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvxv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-7b5446659b-48cdq_calico-apiserver(4f54519e-b15c-42cf-aa0f-f8649bda1c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:20.522974 kubelet[3528]: E1212 17:28:20.522891 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:28:20.732929 systemd-networkd[1628]: calia05a57c583c: Gained IPv6LL Dec 12 17:28:20.848215 kubelet[3528]: E1212 17:28:20.848146 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:28:20.912958 kubelet[3528]: I1212 17:28:20.912867 3528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v8vwg" podStartSLOduration=57.912843341 podStartE2EDuration="57.912843341s" podCreationTimestamp="2025-12-12 17:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:28:20.883648793 +0000 UTC m=+61.847930240" watchObservedRunningTime="2025-12-12 17:28:20.912843341 +0000 UTC m=+61.877124776" Dec 12 17:28:20.971000 audit[5568]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=5568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:20.974232 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 17:28:20.974318 kernel: audit: type=1325 audit(1765560500.971:754): table=filter:137 family=2 entries=20 op=nft_register_rule pid=5568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:20.985587 kernel: audit: type=1300 audit(1765560500.971:754): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc5185a20 a2=0 a3=1 items=0 ppid=3675 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:20.971000 audit[5568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc5185a20 a2=0 a3=1 items=0 ppid=3675 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:20.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:20.991811 kernel: audit: type=1327 audit(1765560500.971:754): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:20.995000 audit[5568]: NETFILTER_CFG table=nat:138 family=2 entries=14 op=nft_register_rule pid=5568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:20.995000 audit[5568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc5185a20 a2=0 a3=1 items=0 ppid=3675 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:21.008508 kernel: audit: type=1325 audit(1765560500.995:755): table=nat:138 family=2 entries=14 op=nft_register_rule pid=5568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:21.008624 kernel: audit: type=1300 audit(1765560500.995:755): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc5185a20 a2=0 a3=1 items=0 ppid=3675 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:20.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:21.014717 kernel: audit: type=1327 audit(1765560500.995:755): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:21.052352 systemd-networkd[1628]: cali72cb29f0745: Gained IPv6LL Dec 12 17:28:21.198000 audit[5570]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:21.211080 kernel: audit: type=1325 audit(1765560501.198:756): table=filter:139 family=2 entries=20 op=nft_register_rule pid=5570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:21.211226 kernel: audit: type=1300 audit(1765560501.198:756): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe2b21eb0 a2=0 a3=1 items=0 ppid=3675 pid=5570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:21.198000 audit[5570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe2b21eb0 a2=0 a3=1 items=0 ppid=3675 pid=5570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:21.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:21.216312 kernel: audit: type=1327 audit(1765560501.198:756): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:21.217000 audit[5570]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=5570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:21.223738 kernel: audit: type=1325 audit(1765560501.217:757): table=nat:140 family=2 entries=14 op=nft_register_rule pid=5570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:21.217000 audit[5570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 
a0=3 a1=ffffe2b21eb0 a2=0 a3=1 items=0 ppid=3675 pid=5570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:21.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:21.865448 kubelet[3528]: E1212 17:28:21.864732 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:28:21.889258 kubelet[3528]: I1212 17:28:21.889029 3528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h6vwb" podStartSLOduration=58.889005378 podStartE2EDuration="58.889005378s" podCreationTimestamp="2025-12-12 17:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:28:20.966145109 +0000 UTC m=+61.930426604" watchObservedRunningTime="2025-12-12 17:28:21.889005378 +0000 UTC m=+62.853286873" Dec 12 17:28:21.975000 audit[5580]: NETFILTER_CFG table=filter:141 family=2 entries=17 op=nft_register_rule pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:21.975000 audit[5580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd5be0a70 a2=0 a3=1 items=0 ppid=3675 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:21.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:22.001000 audit[5580]: NETFILTER_CFG table=nat:142 family=2 entries=35 op=nft_register_chain pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:22.001000 audit[5580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd5be0a70 a2=0 a3=1 items=0 ppid=3675 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:22.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:23.033000 audit[5582]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:23.033000 audit[5582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcea61960 a2=0 a3=1 items=0 ppid=3675 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:23.033000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:23.072000 audit[5582]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:23.072000 audit[5582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcea61960 a2=0 a3=1 items=0 ppid=3675 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:23.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:23.561091 ntpd[1877]: Listen normally on 6 vxlan.calico 192.168.52.128:123 Dec 12 17:28:23.561195 ntpd[1877]: Listen normally on 7 cali8614af73825 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 6 vxlan.calico 192.168.52.128:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 7 cali8614af73825 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 8 vxlan.calico [fe80::64e8:95ff:feff:fb88%5]:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 9 cali8e50f429e53 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 10 cali0fb89285d33 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 11 cali58afbb977a8 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 12 cali6b52c6fba2b [fe80::ecee:eeff:feee:eeee%11]:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 13 calia05a57c583c [fe80::ecee:eeff:feee:eeee%12]:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 14 cali5f5b02f0c57 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 12 17:28:23.561787 ntpd[1877]: 12 Dec 17:28:23 ntpd[1877]: Listen normally on 15 cali72cb29f0745 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 12 17:28:23.561246 ntpd[1877]: Listen normally on 8 vxlan.calico [fe80::64e8:95ff:feff:fb88%5]:123 Dec 12 17:28:23.561293 ntpd[1877]: Listen normally on 9 cali8e50f429e53 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 12 17:28:23.561340 ntpd[1877]: Listen normally on 10 cali0fb89285d33 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 12 17:28:23.561392 ntpd[1877]: Listen normally on 11 cali58afbb977a8 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 12 17:28:23.561522 ntpd[1877]: Listen normally on 12 cali6b52c6fba2b [fe80::ecee:eeff:feee:eeee%11]:123 Dec 12 17:28:23.561578 ntpd[1877]: Listen normally on 13 calia05a57c583c [fe80::ecee:eeff:feee:eeee%12]:123 Dec 12 17:28:23.561626 ntpd[1877]: Listen normally on 14 cali5f5b02f0c57 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 12 17:28:23.561722 ntpd[1877]: Listen normally on 15 cali72cb29f0745 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 12 17:28:25.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.17.228:22-139.178.68.195:48870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:25.947799 systemd[1]: Started sshd@7-172.31.17.228:22-139.178.68.195:48870.service - OpenSSH per-connection server daemon (139.178.68.195:48870). Dec 12 17:28:26.152481 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 12 17:28:26.152628 kernel: audit: type=1101 audit(1765560506.144:763): pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.144000 audit[5599]: USER_ACCT pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.152831 sshd[5599]: Accepted publickey for core from 139.178.68.195 port 48870 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:26.152000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.154359 kernel: audit: type=1103 audit(1765560506.152:764): pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.155976 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:26.162786 kernel: audit: type=1006 audit(1765560506.153:765): pid=5599 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 12 17:28:26.163758 kernel: audit: type=1300 audit(1765560506.153:765): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd138ad70 a2=3 a3=0 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.153000 audit[5599]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd138ad70 a2=3 a3=0 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.153000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:26.173127 kernel: audit: type=1327 audit(1765560506.153:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:26.178897 systemd-logind[1883]: New session 8 of user core. Dec 12 17:28:26.185009 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 12 17:28:26.190000 audit[5599]: USER_START pid=5599 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.198994 kernel: audit: type=1105 audit(1765560506.190:766): pid=5599 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.199141 kernel: audit: type=1103 audit(1765560506.198:767): pid=5602 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.198000 audit[5602]: CRED_ACQ pid=5602 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.440048 sshd[5602]: Connection closed by 139.178.68.195 port 48870 Dec 12 17:28:26.441007 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:26.444000 audit[5599]: USER_END pid=5599 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.452887 systemd[1]: sshd@7-172.31.17.228:22-139.178.68.195:48870.service: Deactivated successfully. Dec 12 17:28:26.445000 audit[5599]: CRED_DISP pid=5599 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.460138 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:28:26.464173 kernel: audit: type=1106 audit(1765560506.444:768): pid=5599 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.464540 kernel: audit: type=1104 audit(1765560506.445:769): pid=5599 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.17.228:22-139.178.68.195:48870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:26.466073 systemd-logind[1883]: Session 8 logged out. Waiting for processes to exit. 
Dec 12 17:28:26.470654 kernel: audit: type=1131 audit(1765560506.453:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.17.228:22-139.178.68.195:48870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:26.475029 systemd-logind[1883]: Removed session 8. Dec 12 17:28:29.326735 containerd[1911]: time="2025-12-12T17:28:29.325952099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:28:29.596397 containerd[1911]: time="2025-12-12T17:28:29.596152128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:29.598555 containerd[1911]: time="2025-12-12T17:28:29.598484844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:28:29.599035 containerd[1911]: time="2025-12-12T17:28:29.598559220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:29.599505 kubelet[3528]: E1212 17:28:29.599427 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:29.600632 kubelet[3528]: E1212 17:28:29.600142 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:29.600632 kubelet[3528]: E1212 17:28:29.600493 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9f0be7f5aecb4214a695a0df6daf94fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8zbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6b7d45d746-ssppz_calico-system(c3f845e7-b0d1-412d-aad9-3771e0979bfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:29.605186 containerd[1911]: time="2025-12-12T17:28:29.604272420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:28:29.859319 containerd[1911]: time="2025-12-12T17:28:29.859138729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:29.862045 containerd[1911]: time="2025-12-12T17:28:29.861885757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:28:29.862246 containerd[1911]: time="2025-12-12T17:28:29.861955957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:29.862573 kubelet[3528]: E1212 17:28:29.862497 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:29.863817 kubelet[3528]: E1212 17:28:29.862572 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:29.863817 kubelet[3528]: E1212 17:28:29.862864 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7d45d746-ssppz_calico-system(c3f845e7-b0d1-412d-aad9-3771e0979bfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:29.865214 kubelet[3528]: E1212 17:28:29.865069 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc" Dec 12 17:28:30.327972 containerd[1911]: time="2025-12-12T17:28:30.327545064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:28:30.629267 containerd[1911]: time="2025-12-12T17:28:30.629189089Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:30.631527 containerd[1911]: time="2025-12-12T17:28:30.631447429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:28:30.631652 containerd[1911]: time="2025-12-12T17:28:30.631565401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:30.632009 kubelet[3528]: E1212 17:28:30.631920 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:30.633063 kubelet[3528]: E1212 17:28:30.632005 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:30.633161 containerd[1911]: time="2025-12-12T17:28:30.632432377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:30.633921 kubelet[3528]: E1212 17:28:30.632575 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pq5rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6755b5785f-427k6_calico-system(b11dd849-e38b-40e1-a2d3-0061a9f777d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:30.635238 kubelet[3528]: E1212 17:28:30.635064 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:28:30.893906 containerd[1911]: time="2025-12-12T17:28:30.889306214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:30.893906 containerd[1911]: time="2025-12-12T17:28:30.892514162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:30.893906 containerd[1911]: time="2025-12-12T17:28:30.892552442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:30.894300 kubelet[3528]: E1212 17:28:30.892831 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:30.894300 kubelet[3528]: E1212 17:28:30.892889 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:30.894300 kubelet[3528]: E1212 17:28:30.893055 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:30.898023 containerd[1911]: time="2025-12-12T17:28:30.897887306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:31.212211 containerd[1911]: time="2025-12-12T17:28:31.212058204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:31.214848 containerd[1911]: time="2025-12-12T17:28:31.214727844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:31.214848 containerd[1911]: time="2025-12-12T17:28:31.214760172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:31.215617 kubelet[3528]: E1212 17:28:31.215149 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:31.215617 kubelet[3528]: E1212 17:28:31.215210 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:31.215617 kubelet[3528]: E1212 17:28:31.215388 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:31.217320 kubelet[3528]: E1212 17:28:31.217230 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:28:31.332282 containerd[1911]: time="2025-12-12T17:28:31.332111425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:31.485120 systemd[1]: Started sshd@8-172.31.17.228:22-139.178.68.195:35788.service - OpenSSH per-connection server daemon (139.178.68.195:35788). 
Dec 12 17:28:31.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.17.228:22-139.178.68.195:35788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:31.492721 kernel: audit: type=1130 audit(1765560511.485:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.17.228:22-139.178.68.195:35788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:31.597324 containerd[1911]: time="2025-12-12T17:28:31.597107282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:31.599413 containerd[1911]: time="2025-12-12T17:28:31.599278970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:31.599585 containerd[1911]: time="2025-12-12T17:28:31.599382170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:31.599652 kubelet[3528]: E1212 17:28:31.599592 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:31.599767 kubelet[3528]: E1212 17:28:31.599735 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:31.600431 kubelet[3528]: E1212 17:28:31.600000 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc92w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b5446659b-7cz6s_calico-apiserver(464adca8-6b09-4273-9180-6050c84a6f28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:31.601785 kubelet[3528]: E1212 17:28:31.601719 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:28:31.683000 audit[5617]: USER_ACCT pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.685897 sshd[5617]: Accepted publickey for core from 139.178.68.195 port 35788 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:31.691480 kernel: audit: type=1101 audit(1765560511.683:772): pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.691000 audit[5617]: CRED_ACQ pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.695938 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:31.702771 kernel: audit: type=1103 audit(1765560511.691:773): pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.702882 kernel: audit: type=1006 audit(1765560511.694:774): pid=5617 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 12 17:28:31.694000 audit[5617]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 
a1=ffffd3d3da30 a2=3 a3=0 items=0 ppid=1 pid=5617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:31.710857 kernel: audit: type=1300 audit(1765560511.694:774): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3d3da30 a2=3 a3=0 items=0 ppid=1 pid=5617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:31.694000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:31.715203 kernel: audit: type=1327 audit(1765560511.694:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:31.724357 systemd-logind[1883]: New session 9 of user core. Dec 12 17:28:31.730547 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:28:31.746000 audit[5617]: USER_START pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.765713 kernel: audit: type=1105 audit(1765560511.746:775): pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.765861 kernel: audit: type=1103 audit(1765560511.759:776): pid=5620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.759000 audit[5620]: CRED_ACQ pid=5620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.968410 sshd[5620]: Connection closed by 139.178.68.195 port 35788 Dec 12 17:28:31.969416 sshd-session[5617]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:31.971000 audit[5617]: USER_END pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.976572 systemd-logind[1883]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:28:31.971000 audit[5617]: CRED_DISP pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.981780 systemd[1]: sshd@8-172.31.17.228:22-139.178.68.195:35788.service: Deactivated successfully. 
Dec 12 17:28:31.985974 kernel: audit: type=1106 audit(1765560511.971:777): pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.987706 kernel: audit: type=1104 audit(1765560511.971:778): pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:31.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.17.228:22-139.178.68.195:35788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:31.989351 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:28:31.994455 systemd-logind[1883]: Removed session 9. Dec 12 17:28:33.327914 containerd[1911]: time="2025-12-12T17:28:33.327165987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:28:33.594481 containerd[1911]: time="2025-12-12T17:28:33.594405400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:33.597192 containerd[1911]: time="2025-12-12T17:28:33.596921068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:33.597192 containerd[1911]: time="2025-12-12T17:28:33.596929540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:28:33.597689 kubelet[3528]: E1212 17:28:33.597576 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:33.599652 kubelet[3528]: E1212 17:28:33.597651 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:33.599652 kubelet[3528]: E1212 17:28:33.597884 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6zf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fxswz_calico-system(4993f81d-df62-4e56-b3d3-f820e3c156d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:33.600139 kubelet[3528]: E1212 17:28:33.600040 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:28:34.324504 containerd[1911]: time="2025-12-12T17:28:34.324427083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 
12 17:28:34.623642 containerd[1911]: time="2025-12-12T17:28:34.623426801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:34.625820 containerd[1911]: time="2025-12-12T17:28:34.625616945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:34.625820 containerd[1911]: time="2025-12-12T17:28:34.625739981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:34.626566 kubelet[3528]: E1212 17:28:34.626210 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:34.626566 kubelet[3528]: E1212 17:28:34.626276 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:34.626566 kubelet[3528]: E1212 17:28:34.626482 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvxv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-7b5446659b-48cdq_calico-apiserver(4f54519e-b15c-42cf-aa0f-f8649bda1c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:34.628787 kubelet[3528]: E1212 17:28:34.628643 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:28:37.007539 systemd[1]: Started sshd@9-172.31.17.228:22-139.178.68.195:35794.service - OpenSSH per-connection server daemon (139.178.68.195:35794). Dec 12 17:28:37.015509 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:37.015643 kernel: audit: type=1130 audit(1765560517.007:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.17.228:22-139.178.68.195:35794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:37.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.17.228:22-139.178.68.195:35794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:37.201000 audit[5639]: USER_ACCT pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.202179 sshd[5639]: Accepted publickey for core from 139.178.68.195 port 35794 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:37.209765 kernel: audit: type=1101 audit(1765560517.201:781): pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.209000 audit[5639]: CRED_ACQ pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.211349 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:37.219569 kernel: audit: type=1103 audit(1765560517.209:782): pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.219748 kernel: audit: type=1006 audit(1765560517.209:783): pid=5639 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 17:28:37.220019 kernel: audit: type=1300 audit(1765560517.209:783): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed9ced10 a2=3 a3=0 
items=0 ppid=1 pid=5639 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:37.209000 audit[5639]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed9ced10 a2=3 a3=0 items=0 ppid=1 pid=5639 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:37.209000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:37.228763 kernel: audit: type=1327 audit(1765560517.209:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:37.236398 systemd-logind[1883]: New session 10 of user core. Dec 12 17:28:37.242091 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:28:37.248000 audit[5639]: USER_START pid=5639 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.255000 audit[5642]: CRED_ACQ pid=5642 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.261669 kernel: audit: type=1105 audit(1765560517.248:784): pid=5639 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.261816 kernel: audit: type=1103 audit(1765560517.255:785): pid=5642 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.469875 sshd[5642]: Connection closed by 139.178.68.195 port 35794 Dec 12 17:28:37.471484 sshd-session[5639]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:37.473000 audit[5639]: USER_END pid=5639 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.483129 systemd[1]: sshd@9-172.31.17.228:22-139.178.68.195:35794.service: Deactivated successfully. 
Dec 12 17:28:37.474000 audit[5639]: CRED_DISP pid=5639 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.483755 kernel: audit: type=1106 audit(1765560517.473:786): pid=5639 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.489472 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:28:37.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.17.228:22-139.178.68.195:35794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:37.492865 kernel: audit: type=1104 audit(1765560517.474:787): pid=5639 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.495512 systemd-logind[1883]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:28:37.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.17.228:22-139.178.68.195:35808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:37.513244 systemd[1]: Started sshd@10-172.31.17.228:22-139.178.68.195:35808.service - OpenSSH per-connection server daemon (139.178.68.195:35808). Dec 12 17:28:37.517411 systemd-logind[1883]: Removed session 10. Dec 12 17:28:37.693000 audit[5654]: USER_ACCT pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.694382 sshd[5654]: Accepted publickey for core from 139.178.68.195 port 35808 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:37.695000 audit[5654]: CRED_ACQ pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.695000 audit[5654]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb778560 a2=3 a3=0 items=0 ppid=1 pid=5654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:37.695000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:37.696895 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:37.708950 systemd-logind[1883]: New session 11 of user core. Dec 12 17:28:37.724043 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 17:28:37.732000 audit[5654]: USER_START pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:37.736000 audit[5657]: CRED_ACQ pid=5657 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.023551 sshd[5657]: Connection closed by 139.178.68.195 port 35808 Dec 12 17:28:38.050897 sshd-session[5654]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:38.054000 audit[5654]: USER_END pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.055000 audit[5654]: CRED_DISP pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.067096 systemd[1]: sshd@10-172.31.17.228:22-139.178.68.195:35808.service: Deactivated successfully. Dec 12 17:28:38.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.17.228:22-139.178.68.195:35808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:38.077469 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:28:38.081511 systemd-logind[1883]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:28:38.090567 systemd[1]: Started sshd@11-172.31.17.228:22-139.178.68.195:35820.service - OpenSSH per-connection server daemon (139.178.68.195:35820). Dec 12 17:28:38.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.17.228:22-139.178.68.195:35820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:38.093402 systemd-logind[1883]: Removed session 11. 
Dec 12 17:28:38.297000 audit[5667]: USER_ACCT pid=5667 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.298735 sshd[5667]: Accepted publickey for core from 139.178.68.195 port 35820 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:38.299000 audit[5667]: CRED_ACQ pid=5667 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.299000 audit[5667]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe643fb40 a2=3 a3=0 items=0 ppid=1 pid=5667 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:38.299000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:38.301082 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:38.310874 systemd-logind[1883]: New session 12 of user core. Dec 12 17:28:38.325047 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:28:38.330000 audit[5667]: USER_START pid=5667 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.334000 audit[5670]: CRED_ACQ pid=5670 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.550513 sshd[5670]: Connection closed by 139.178.68.195 port 35820 Dec 12 17:28:38.551798 sshd-session[5667]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:38.556000 audit[5667]: USER_END pid=5667 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.556000 audit[5667]: CRED_DISP pid=5667 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.562522 systemd[1]: sshd@11-172.31.17.228:22-139.178.68.195:35820.service: Deactivated successfully. Dec 12 17:28:38.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.17.228:22-139.178.68.195:35820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:38.568207 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:28:38.573541 systemd-logind[1883]: Session 12 logged out. Waiting for processes to exit. 
Dec 12 17:28:38.577480 systemd-logind[1883]: Removed session 12. Dec 12 17:28:43.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.17.228:22-139.178.68.195:56242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:43.596170 systemd[1]: Started sshd@12-172.31.17.228:22-139.178.68.195:56242.service - OpenSSH per-connection server daemon (139.178.68.195:56242). Dec 12 17:28:43.598690 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 17:28:43.598830 kernel: audit: type=1130 audit(1765560523.595:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.17.228:22-139.178.68.195:56242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:43.795000 audit[5713]: USER_ACCT pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:43.796477 sshd[5713]: Accepted publickey for core from 139.178.68.195 port 56242 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:43.802767 kernel: audit: type=1101 audit(1765560523.795:808): pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:43.802000 audit[5713]: CRED_ACQ pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:43.804411 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:43.812761 kernel: audit: type=1103 audit(1765560523.802:809): pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:43.813007 kernel: audit: type=1006 audit(1765560523.803:810): pid=5713 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 17:28:43.803000 audit[5713]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6dde1f0 a2=3 a3=0 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:43.819278 kernel: audit: type=1300 audit(1765560523.803:810): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6dde1f0 a2=3 a3=0 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:43.803000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:43.822055 kernel: audit: type=1327 audit(1765560523.803:810): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:43.830824 systemd-logind[1883]: New session 13 of user core. Dec 12 17:28:43.835058 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:28:43.842000 audit[5713]: USER_START pid=5713 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:43.852033 kernel: audit: type=1105 audit(1765560523.842:811): pid=5713 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:43.852175 kernel: audit: type=1103 audit(1765560523.850:812): pid=5717 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:43.850000 audit[5717]: CRED_ACQ pid=5717 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.057573 sshd[5717]: Connection closed by 139.178.68.195 port 56242 Dec 12 17:28:44.058589 sshd-session[5713]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:44.061000 audit[5713]: USER_END pid=5713 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.068240 systemd[1]: sshd@12-172.31.17.228:22-139.178.68.195:56242.service: Deactivated successfully. Dec 12 17:28:44.061000 audit[5713]: CRED_DISP pid=5713 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.078140 kernel: audit: type=1106 audit(1765560524.061:813): pid=5713 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.078278 kernel: audit: type=1104 audit(1765560524.061:814): pid=5713 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.073374 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:28:44.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.17.228:22-139.178.68.195:56242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:44.080475 systemd-logind[1883]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:28:44.084830 systemd-logind[1883]: Removed session 13. Dec 12 17:28:44.327255 kubelet[3528]: E1212 17:28:44.326319 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc" Dec 12 17:28:45.329228 kubelet[3528]: E1212 17:28:45.328616 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:28:45.333583 kubelet[3528]: E1212 17:28:45.333508 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:28:45.337702 kubelet[3528]: E1212 17:28:45.337420 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:28:46.325877 kubelet[3528]: E1212 17:28:46.325154 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:28:46.326686 kubelet[3528]: E1212 17:28:46.326445 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:28:49.098312 systemd[1]: Started sshd@13-172.31.17.228:22-139.178.68.195:56256.service - OpenSSH per-connection server daemon (139.178.68.195:56256). Dec 12 17:28:49.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.17.228:22-139.178.68.195:56256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:49.100034 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:49.100163 kernel: audit: type=1130 audit(1765560529.097:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.17.228:22-139.178.68.195:56256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:49.290000 audit[5733]: USER_ACCT pid=5733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.294484 sshd[5733]: Accepted publickey for core from 139.178.68.195 port 56256 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:49.298111 kernel: audit: type=1101 audit(1765560529.290:817): pid=5733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.297000 audit[5733]: CRED_ACQ pid=5733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.300419 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:49.308528 kernel: audit: type=1103 audit(1765560529.297:818): pid=5733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.309343 kernel: audit: type=1006 audit(1765560529.297:819): pid=5733 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 17:28:49.297000 audit[5733]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 
a1=fffff09a1c20 a2=3 a3=0 items=0 ppid=1 pid=5733 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:49.317444 kernel: audit: type=1300 audit(1765560529.297:819): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff09a1c20 a2=3 a3=0 items=0 ppid=1 pid=5733 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:49.317598 kernel: audit: type=1327 audit(1765560529.297:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:49.297000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:49.319429 systemd-logind[1883]: New session 14 of user core. Dec 12 17:28:49.329082 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:28:49.337000 audit[5733]: USER_START pid=5733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.344000 audit[5736]: CRED_ACQ pid=5736 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.350688 kernel: audit: type=1105 audit(1765560529.337:820): pid=5733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.350803 kernel: audit: type=1103 audit(1765560529.344:821): pid=5736 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.562773 sshd[5736]: Connection closed by 139.178.68.195 port 56256 Dec 12 17:28:49.564021 sshd-session[5733]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:49.567000 audit[5733]: USER_END pid=5733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.573012 systemd[1]: sshd@13-172.31.17.228:22-139.178.68.195:56256.service: Deactivated successfully. Dec 12 17:28:49.573645 systemd-logind[1883]: Session 14 logged out. Waiting for processes to exit. 
Dec 12 17:28:49.567000 audit[5733]: CRED_DISP pid=5733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.583519 kernel: audit: type=1106 audit(1765560529.567:822): pid=5733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.583688 kernel: audit: type=1104 audit(1765560529.567:823): pid=5733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.582348 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:28:49.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.17.228:22-139.178.68.195:56256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:49.589166 systemd-logind[1883]: Removed session 14. Dec 12 17:28:54.616015 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:54.616153 kernel: audit: type=1130 audit(1765560534.608:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.17.228:22-139.178.68.195:44148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:54.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.17.228:22-139.178.68.195:44148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:54.608366 systemd[1]: Started sshd@14-172.31.17.228:22-139.178.68.195:44148.service - OpenSSH per-connection server daemon (139.178.68.195:44148). 
Dec 12 17:28:54.827000 audit[5748]: USER_ACCT pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:54.834213 sshd[5748]: Accepted publickey for core from 139.178.68.195 port 44148 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:54.836326 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:54.834000 audit[5748]: CRED_ACQ pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:54.842747 kernel: audit: type=1101 audit(1765560534.827:826): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:54.842888 kernel: audit: type=1103 audit(1765560534.834:827): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:54.844718 kernel: audit: type=1006 audit(1765560534.834:828): pid=5748 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 17:28:54.834000 audit[5748]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef9d2460 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:54.853967 kernel: audit: type=1300 audit(1765560534.834:828): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef9d2460 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:54.834000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:54.857290 kernel: audit: type=1327 audit(1765560534.834:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:54.865850 systemd-logind[1883]: New session 15 of user core. Dec 12 17:28:54.874019 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 17:28:54.883000 audit[5748]: USER_START pid=5748 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:54.890000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:54.899135 kernel: audit: type=1105 audit(1765560534.883:829): pid=5748 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:54.899277 kernel: audit: type=1103 audit(1765560534.890:830): pid=5751 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:55.124410 sshd[5751]: Connection closed by 139.178.68.195 port 44148 Dec 12 17:28:55.126020 sshd-session[5748]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:55.129000 audit[5748]: USER_END pid=5748 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:55.143653 systemd-logind[1883]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:28:55.143909 systemd[1]: sshd@14-172.31.17.228:22-139.178.68.195:44148.service: Deactivated successfully. Dec 12 17:28:55.137000 audit[5748]: CRED_DISP pid=5748 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:55.152741 kernel: audit: type=1106 audit(1765560535.129:831): pid=5748 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:55.152868 kernel: audit: type=1104 audit(1765560535.137:832): pid=5748 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:55.151326 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:28:55.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.17.228:22-139.178.68.195:44148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:55.160588 systemd-logind[1883]: Removed session 15. 
Dec 12 17:28:56.324700 containerd[1911]: time="2025-12-12T17:28:56.324123733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:28:56.600562 containerd[1911]: time="2025-12-12T17:28:56.600485426Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:56.602920 containerd[1911]: time="2025-12-12T17:28:56.602698538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:56.602920 containerd[1911]: time="2025-12-12T17:28:56.602699222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:28:56.603617 kubelet[3528]: E1212 17:28:56.603115 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:56.603617 kubelet[3528]: E1212 17:28:56.603177 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:56.603617 kubelet[3528]: E1212 17:28:56.603530 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9f0be7f5aecb4214a695a0df6daf94fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8zbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7d45d746-ssppz_calico-system(c3f845e7-b0d1-412d-aad9-3771e0979bfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:56.605408 containerd[1911]: time="2025-12-12T17:28:56.604315550Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:56.874506 containerd[1911]: time="2025-12-12T17:28:56.874330167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:56.876726 containerd[1911]: time="2025-12-12T17:28:56.876632463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:56.876866 containerd[1911]: time="2025-12-12T17:28:56.876788379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:56.877098 kubelet[3528]: E1212 17:28:56.877041 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:56.877227 kubelet[3528]: E1212 17:28:56.877113 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:56.877491 kubelet[3528]: E1212 17:28:56.877405 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:56.878896 containerd[1911]: time="2025-12-12T17:28:56.878285595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:28:57.176051 containerd[1911]: time="2025-12-12T17:28:57.175897105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:57.178375 containerd[1911]: time="2025-12-12T17:28:57.178084717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:57.178375 containerd[1911]: time="2025-12-12T17:28:57.178093081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:28:57.178649 kubelet[3528]: E1212 17:28:57.178564 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:57.178800 kubelet[3528]: E1212 17:28:57.178699 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:57.179099 kubelet[3528]: E1212 17:28:57.179016 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7d45d746-ssppz_calico-system(c3f845e7-b0d1-412d-aad9-3771e0979bfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:57.181147 kubelet[3528]: E1212 17:28:57.181063 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc" Dec 12 17:28:57.181607 containerd[1911]: time="2025-12-12T17:28:57.181554613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:57.506613 containerd[1911]: time="2025-12-12T17:28:57.506434215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:57.508881 containerd[1911]: time="2025-12-12T17:28:57.508800735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:57.509153 containerd[1911]: time="2025-12-12T17:28:57.508930215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:57.509485 kubelet[3528]: E1212 17:28:57.509436 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:57.509684 kubelet[3528]: E1212 17:28:57.509628 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:57.510501 containerd[1911]: time="2025-12-12T17:28:57.510173907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:28:57.510822 kubelet[3528]: E1212 17:28:57.510304 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 
17:28:57.512737 kubelet[3528]: E1212 17:28:57.512405 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:28:57.803598 containerd[1911]: time="2025-12-12T17:28:57.803264944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:57.805753 containerd[1911]: time="2025-12-12T17:28:57.805553716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:28:57.805753 containerd[1911]: time="2025-12-12T17:28:57.805630900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:57.806210 kubelet[3528]: E1212 17:28:57.806128 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:57.806883 kubelet[3528]: E1212 17:28:57.806224 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:57.806883 kubelet[3528]: E1212 17:28:57.806645 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pq5rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6755b5785f-427k6_calico-system(b11dd849-e38b-40e1-a2d3-0061a9f777d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:57.808846 kubelet[3528]: E1212 17:28:57.808765 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:28:59.327387 containerd[1911]: time="2025-12-12T17:28:59.326312380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:59.634194 containerd[1911]: 
time="2025-12-12T17:28:59.634118285Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:59.636253 containerd[1911]: time="2025-12-12T17:28:59.636170129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:59.636512 containerd[1911]: time="2025-12-12T17:28:59.636213101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:59.636571 kubelet[3528]: E1212 17:28:59.636469 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:59.636571 kubelet[3528]: E1212 17:28:59.636533 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:59.637159 kubelet[3528]: E1212 17:28:59.636883 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc92w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7b5446659b-7cz6s_calico-apiserver(464adca8-6b09-4273-9180-6050c84a6f28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:59.638689 kubelet[3528]: E1212 17:28:59.638589 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:29:00.162575 systemd[1]: Started sshd@15-172.31.17.228:22-139.178.68.195:51426.service - OpenSSH per-connection server daemon (139.178.68.195:51426). Dec 12 17:29:00.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.17.228:22-139.178.68.195:51426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:00.164685 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:00.164791 kernel: audit: type=1130 audit(1765560540.162:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.17.228:22-139.178.68.195:51426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:00.330200 containerd[1911]: time="2025-12-12T17:29:00.328653101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:29:00.347000 audit[5771]: USER_ACCT pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.356770 kernel: audit: type=1101 audit(1765560540.347:835): pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.356904 sshd[5771]: Accepted publickey for core from 139.178.68.195 port 51426 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:00.361000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.368750 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:00.372940 kernel: audit: type=1103 audit(1765560540.361:836): pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.373586 kernel: audit: type=1006 audit(1765560540.367:837): pid=5771 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 
17:29:00.367000 audit[5771]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd96eaeb0 a2=3 a3=0 items=0 ppid=1 pid=5771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:00.381355 kernel: audit: type=1300 audit(1765560540.367:837): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd96eaeb0 a2=3 a3=0 items=0 ppid=1 pid=5771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:00.367000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:00.384116 kernel: audit: type=1327 audit(1765560540.367:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:00.392012 systemd-logind[1883]: New session 16 of user core. Dec 12 17:29:00.400018 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 17:29:00.405000 audit[5771]: USER_START pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.413000 audit[5776]: CRED_ACQ pid=5776 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.420143 kernel: audit: type=1105 audit(1765560540.405:838): pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.420303 kernel: audit: type=1103 audit(1765560540.413:839): pid=5776 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.610300 containerd[1911]: time="2025-12-12T17:29:00.610212654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:00.612429 containerd[1911]: time="2025-12-12T17:29:00.612347202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:29:00.613391 containerd[1911]: time="2025-12-12T17:29:00.612556290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:00.613596 kubelet[3528]: E1212 17:29:00.612887 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:00.613596 kubelet[3528]: E1212 17:29:00.612957 3528 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:00.613596 kubelet[3528]: E1212 17:29:00.613218 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvxv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b5446659b-48cdq_calico-apiserver(4f54519e-b15c-42cf-aa0f-f8649bda1c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:00.614975 kubelet[3528]: E1212 17:29:00.614845 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:29:00.669829 sshd[5776]: Connection closed by 139.178.68.195 port 51426 Dec 12 17:29:00.671230 sshd-session[5771]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:00.674000 audit[5771]: USER_END pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.677000 audit[5771]: CRED_DISP pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.682821 systemd[1]: sshd@15-172.31.17.228:22-139.178.68.195:51426.service: Deactivated successfully. Dec 12 17:29:00.688323 kernel: audit: type=1106 audit(1765560540.674:840): pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.688490 kernel: audit: type=1104 audit(1765560540.677:841): pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.17.228:22-139.178.68.195:51426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:00.689847 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:29:00.693487 systemd-logind[1883]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:29:00.712638 systemd[1]: Started sshd@16-172.31.17.228:22-139.178.68.195:51442.service - OpenSSH per-connection server daemon (139.178.68.195:51442). Dec 12 17:29:00.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.17.228:22-139.178.68.195:51442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:00.716508 systemd-logind[1883]: Removed session 16. 
Dec 12 17:29:00.920000 audit[5788]: USER_ACCT pid=5788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.921988 sshd[5788]: Accepted publickey for core from 139.178.68.195 port 51442 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:00.923000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.924000 audit[5788]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbbb5290 a2=3 a3=0 items=0 ppid=1 pid=5788 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:00.924000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:00.925716 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:00.943498 systemd-logind[1883]: New session 17 of user core. Dec 12 17:29:00.952134 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:29:00.958000 audit[5788]: USER_START pid=5788 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:00.963000 audit[5791]: CRED_ACQ pid=5791 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:01.328553 containerd[1911]: time="2025-12-12T17:29:01.328372506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:29:01.616211 containerd[1911]: time="2025-12-12T17:29:01.615991579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:01.618381 containerd[1911]: time="2025-12-12T17:29:01.618242287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:29:01.618381 containerd[1911]: time="2025-12-12T17:29:01.618303235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:01.618786 kubelet[3528]: E1212 17:29:01.618736 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:29:01.619726 kubelet[3528]: E1212 17:29:01.619335 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:29:01.619726 kubelet[3528]: E1212 17:29:01.619571 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6zf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fxswz_calico-system(4993f81d-df62-4e56-b3d3-f820e3c156d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:01.620925 kubelet[3528]: E1212 17:29:01.620854 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:29:02.240908 sshd[5791]: Connection closed by 139.178.68.195 port 51442 Dec 12 17:29:02.241305 sshd-session[5788]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:02.244000 audit[5788]: USER_END pid=5788 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:02.245000 audit[5788]: CRED_DISP pid=5788 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:02.251169 systemd[1]: sshd@16-172.31.17.228:22-139.178.68.195:51442.service: Deactivated successfully. Dec 12 17:29:02.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.17.228:22-139.178.68.195:51442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:02.258013 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:29:02.262327 systemd-logind[1883]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:29:02.278311 systemd[1]: Started sshd@17-172.31.17.228:22-139.178.68.195:51458.service - OpenSSH per-connection server daemon (139.178.68.195:51458). Dec 12 17:29:02.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.17.228:22-139.178.68.195:51458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:02.281095 systemd-logind[1883]: Removed session 17. Dec 12 17:29:02.472000 audit[5801]: USER_ACCT pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:02.474117 sshd[5801]: Accepted publickey for core from 139.178.68.195 port 51458 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:02.475000 audit[5801]: CRED_ACQ pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:02.475000 audit[5801]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe691e8c0 a2=3 a3=0 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:02.475000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:02.476960 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:02.487071 systemd-logind[1883]: New session 18 of user core. Dec 12 17:29:02.495018 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 17:29:02.501000 audit[5801]: USER_START pid=5801 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:02.505000 audit[5804]: CRED_ACQ pid=5804 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:03.703000 audit[5818]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:03.703000 audit[5818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdd1287c0 a2=0 a3=1 items=0 ppid=3675 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:03.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:03.711000 audit[5818]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:03.711000 audit[5818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdd1287c0 a2=0 a3=1 items=0 ppid=3675 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:03.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:03.784634 sshd[5804]: Connection closed by 139.178.68.195 port 51458 Dec 12 17:29:03.785509 sshd-session[5801]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:03.789000 audit[5801]: USER_END pid=5801 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:03.790000 audit[5801]: CRED_DISP pid=5801 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:03.794000 audit[5820]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=5820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:03.799504 systemd[1]: sshd@17-172.31.17.228:22-139.178.68.195:51458.service: Deactivated successfully. Dec 12 17:29:03.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.17.228:22-139.178.68.195:51458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:03.812294 systemd[1]: session-18.scope: Deactivated successfully. 
Dec 12 17:29:03.794000 audit[5820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffec066850 a2=0 a3=1 items=0 ppid=3675 pid=5820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:03.794000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:03.817138 systemd-logind[1883]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:29:03.837062 systemd-logind[1883]: Removed session 18. Dec 12 17:29:03.841397 systemd[1]: Started sshd@18-172.31.17.228:22-139.178.68.195:51472.service - OpenSSH per-connection server daemon (139.178.68.195:51472). Dec 12 17:29:03.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.17.228:22-139.178.68.195:51472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:03.863000 audit[5820]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:03.863000 audit[5820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffec066850 a2=0 a3=1 items=0 ppid=3675 pid=5820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:03.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:04.042000 audit[5825]: USER_ACCT pid=5825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.043212 sshd[5825]: Accepted publickey for core from 139.178.68.195 port 51472 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:04.045000 audit[5825]: CRED_ACQ pid=5825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.045000 audit[5825]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8fbb7c0 a2=3 a3=0 items=0 ppid=1 pid=5825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:04.045000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:04.046653 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:04.057501 systemd-logind[1883]: New session 19 of user core. Dec 12 17:29:04.068026 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 17:29:04.074000 audit[5825]: USER_START pid=5825 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.077000 audit[5828]: CRED_ACQ pid=5828 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.577286 sshd[5828]: Connection closed by 139.178.68.195 port 51472 Dec 12 17:29:04.578522 sshd-session[5825]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:04.582000 audit[5825]: USER_END pid=5825 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.583000 audit[5825]: CRED_DISP pid=5825 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.589388 systemd[1]: sshd@18-172.31.17.228:22-139.178.68.195:51472.service: Deactivated successfully. Dec 12 17:29:04.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.17.228:22-139.178.68.195:51472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:04.598039 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:29:04.606046 systemd-logind[1883]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:29:04.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.17.228:22-139.178.68.195:51474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:04.623539 systemd[1]: Started sshd@19-172.31.17.228:22-139.178.68.195:51474.service - OpenSSH per-connection server daemon (139.178.68.195:51474). Dec 12 17:29:04.626391 systemd-logind[1883]: Removed session 19. 
Dec 12 17:29:04.816000 audit[5838]: USER_ACCT pid=5838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.817870 sshd[5838]: Accepted publickey for core from 139.178.68.195 port 51474 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:04.819000 audit[5838]: CRED_ACQ pid=5838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.819000 audit[5838]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef70ba10 a2=3 a3=0 items=0 ppid=1 pid=5838 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:04.819000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:04.821012 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:04.830069 systemd-logind[1883]: New session 20 of user core. Dec 12 17:29:04.843023 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:29:04.848000 audit[5838]: USER_START pid=5838 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.851000 audit[5841]: CRED_ACQ pid=5841 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:05.040397 sshd[5841]: Connection closed by 139.178.68.195 port 51474 Dec 12 17:29:05.040955 sshd-session[5838]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:05.045000 audit[5838]: USER_END pid=5838 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:05.045000 audit[5838]: CRED_DISP pid=5838 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:05.050247 systemd[1]: sshd@19-172.31.17.228:22-139.178.68.195:51474.service: Deactivated successfully. Dec 12 17:29:05.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.17.228:22-139.178.68.195:51474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:05.055106 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:29:05.061613 systemd-logind[1883]: Session 20 logged out. Waiting for processes to exit. 
Dec 12 17:29:05.064203 systemd-logind[1883]: Removed session 20. Dec 12 17:29:09.328391 kubelet[3528]: E1212 17:29:09.328159 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc" Dec 12 17:29:10.079530 systemd[1]: Started sshd@20-172.31.17.228:22-139.178.68.195:51476.service - OpenSSH per-connection server daemon (139.178.68.195:51476). Dec 12 17:29:10.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.17.228:22-139.178.68.195:51476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:10.083628 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 17:29:10.083791 kernel: audit: type=1130 audit(1765560550.079:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.17.228:22-139.178.68.195:51476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:10.277000 audit[5853]: USER_ACCT pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.278550 sshd[5853]: Accepted publickey for core from 139.178.68.195 port 51476 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:10.284000 audit[5853]: CRED_ACQ pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.286354 sshd-session[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:10.291719 kernel: audit: type=1101 audit(1765560550.277:884): pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.291871 kernel: audit: type=1103 audit(1765560550.284:885): pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.295739 kernel: audit: type=1006 audit(1765560550.284:886): pid=5853 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 12 17:29:10.284000 audit[5853]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc0804c0 a2=3 a3=0 items=0 ppid=1 pid=5853 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:10.303036 kernel: audit: type=1300 audit(1765560550.284:886): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc0804c0 a2=3 a3=0 items=0 ppid=1 pid=5853 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:10.284000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:10.305933 kernel: audit: type=1327 audit(1765560550.284:886): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:10.310118 systemd-logind[1883]: New session 21 of user core. Dec 12 17:29:10.316122 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:29:10.325000 audit[5853]: USER_START pid=5853 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.331074 kubelet[3528]: E1212 17:29:10.330163 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:29:10.336000 audit[5856]: CRED_ACQ pid=5856 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.344220 kernel: audit: type=1105 audit(1765560550.325:887): pid=5853 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.344847 kernel: audit: type=1103 audit(1765560550.336:888): pid=5856 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.537029 sshd[5856]: Connection closed by 139.178.68.195 port 51476 Dec 12 17:29:10.537565 sshd-session[5853]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:10.540000 audit[5853]: USER_END pid=5853 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.548360 systemd[1]: sshd@20-172.31.17.228:22-139.178.68.195:51476.service: Deactivated successfully. Dec 12 17:29:10.540000 audit[5853]: CRED_DISP pid=5853 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.556149 systemd[1]: session-21.scope: Deactivated successfully. 
Dec 12 17:29:10.558783 kernel: audit: type=1106 audit(1765560550.540:889): pid=5853 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.559472 kernel: audit: type=1104 audit(1765560550.540:890): pid=5853 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.17.228:22-139.178.68.195:51476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:10.563134 systemd-logind[1883]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:29:10.567410 systemd-logind[1883]: Removed session 21. Dec 12 17:29:10.924000 audit[5867]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:10.924000 audit[5867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffebc31fa0 a2=0 a3=1 items=0 ppid=3675 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:10.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:10.932000 audit[5867]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=5867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:10.932000 audit[5867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffebc31fa0 a2=0 a3=1 items=0 ppid=3675 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:10.932000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:12.324396 kubelet[3528]: E1212 17:29:12.323996 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:29:13.326697 kubelet[3528]: E1212 17:29:13.324966 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:29:14.325147 kubelet[3528]: E1212 17:29:14.325069 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:29:15.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.17.228:22-139.178.68.195:57530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:15.580294 systemd[1]: Started sshd@21-172.31.17.228:22-139.178.68.195:57530.service - OpenSSH per-connection server daemon (139.178.68.195:57530). Dec 12 17:29:15.582417 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:29:15.583219 kernel: audit: type=1130 audit(1765560555.579:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.17.228:22-139.178.68.195:57530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:15.786000 audit[5894]: USER_ACCT pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.787440 sshd[5894]: Accepted publickey for core from 139.178.68.195 port 57530 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:15.792000 audit[5894]: CRED_ACQ pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.794510 sshd-session[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:15.799671 kernel: audit: type=1101 audit(1765560555.786:895): pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.799825 kernel: audit: type=1103 audit(1765560555.792:896): pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.803583 kernel: audit: type=1006 audit(1765560555.793:897): pid=5894 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 17:29:15.804205 kernel: audit: type=1300 audit(1765560555.793:897): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef8975f0 a2=3 a3=0 items=0 ppid=1 pid=5894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:15.793000 audit[5894]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef8975f0 a2=3 a3=0 items=0 ppid=1 pid=5894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:15.793000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:15.814304 kernel: audit: type=1327 audit(1765560555.793:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:15.821810 systemd-logind[1883]: New session 22 of user core. Dec 12 17:29:15.831053 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:29:15.837000 audit[5894]: USER_START pid=5894 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.846025 kernel: audit: type=1105 audit(1765560555.837:898): pid=5894 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.845000 audit[5897]: CRED_ACQ pid=5897 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.852715 kernel: audit: type=1103 audit(1765560555.845:899): pid=5897 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:16.112717 sshd[5897]: Connection closed by 139.178.68.195 port 57530 Dec 12 17:29:16.113314 sshd-session[5894]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:16.118000 audit[5894]: USER_END pid=5894 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:16.126687 systemd[1]: sshd@21-172.31.17.228:22-139.178.68.195:57530.service: Deactivated successfully. 
Dec 12 17:29:16.118000 audit[5894]: CRED_DISP pid=5894 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:16.135067 kernel: audit: type=1106 audit(1765560556.118:900): pid=5894 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:16.135323 kernel: audit: type=1104 audit(1765560556.118:901): pid=5894 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:16.137863 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:29:16.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.17.228:22-139.178.68.195:57530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:16.144506 systemd-logind[1883]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:29:16.151943 systemd-logind[1883]: Removed session 22. Dec 12 17:29:16.326856 kubelet[3528]: E1212 17:29:16.326724 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:29:21.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.17.228:22-139.178.68.195:59118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:21.152365 systemd[1]: Started sshd@22-172.31.17.228:22-139.178.68.195:59118.service - OpenSSH per-connection server daemon (139.178.68.195:59118). Dec 12 17:29:21.153808 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:21.153882 kernel: audit: type=1130 audit(1765560561.151:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.17.228:22-139.178.68.195:59118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:21.348000 audit[5914]: USER_ACCT pid=5914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.355297 sshd[5914]: Accepted publickey for core from 139.178.68.195 port 59118 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:21.356745 kernel: audit: type=1101 audit(1765560561.348:904): pid=5914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.355000 audit[5914]: CRED_ACQ pid=5914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.357161 sshd-session[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:21.367214 kernel: audit: type=1103 audit(1765560561.355:905): pid=5914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.367336 kernel: audit: type=1006 audit(1765560561.355:906): pid=5914 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 17:29:21.355000 audit[5914]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc96b900 a2=3 a3=0 items=0 ppid=1 pid=5914 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:21.374716 kernel: audit: type=1300 audit(1765560561.355:906): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc96b900 a2=3 a3=0 items=0 ppid=1 pid=5914 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:21.355000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:21.377713 kernel: audit: type=1327 audit(1765560561.355:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:21.384115 systemd-logind[1883]: New session 23 of user core. Dec 12 17:29:21.392100 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 17:29:21.400000 audit[5914]: USER_START pid=5914 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.410399 kernel: audit: type=1105 audit(1765560561.400:907): pid=5914 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.410556 kernel: audit: type=1103 audit(1765560561.409:908): pid=5917 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.409000 audit[5917]: CRED_ACQ pid=5917 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.636689 sshd[5917]: Connection closed by 139.178.68.195 port 59118 Dec 12 17:29:21.634527 sshd-session[5914]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:21.639000 audit[5914]: USER_END pid=5914 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.639000 audit[5914]: CRED_DISP pid=5914 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.653614 systemd[1]: sshd@22-172.31.17.228:22-139.178.68.195:59118.service: Deactivated successfully. Dec 12 17:29:21.654703 kernel: audit: type=1106 audit(1765560561.639:909): pid=5914 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.654871 kernel: audit: type=1104 audit(1765560561.639:910): pid=5914 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.17.228:22-139.178.68.195:59118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:21.666003 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:29:21.674517 systemd-logind[1883]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:29:21.677861 systemd-logind[1883]: Removed session 23. 
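The same cycle keeps repeating in the records above: systemd starts a per-connection sshd@... service unit, logind opens a new session for user core, and the session is closed and removed again almost immediately. To confirm how short-lived these sessions are without reading every audit record, the open/close timestamps can be pulled straight out of the journal text. The sketch below assumes the log has been saved as a plain text file; the file name node.log is hypothetical.

# Sketch: measure how long each SSH session in this log stayed open,
# matching the "New session N" / "Removed session N" logind lines above.
import re
from datetime import datetime

pat_open = re.compile(r"(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user")
pat_close = re.compile(r"(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def ts(s):  # e.g. "Dec 12 17:29:04.057501"; the syslog-style prefix carries no year
    return datetime.strptime(s, "%b %d %H:%M:%S.%f")

opened, durations = {}, {}
for line in open("node.log"):        # hypothetical file name
    if m := pat_open.search(line):
        opened[m.group(2)] = ts(m.group(1))
    elif (m := pat_close.search(line)) and m.group(2) in opened:
        durations[m.group(2)] = ts(m.group(1)) - opened[m.group(2)]

for sid, d in sorted(durations.items(), key=lambda kv: int(kv[0])):
    print(f"session {sid}: {d.total_seconds():.2f}s")

For the sessions captured here (19 through 23 so far) the durations come out well under a second, which suggests an automated client rather than interactive use.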
Dec 12 17:29:22.327472 kubelet[3528]: E1212 17:29:22.327145 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc" Dec 12 17:29:23.328109 kubelet[3528]: E1212 17:29:23.327103 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:29:25.332210 kubelet[3528]: E1212 17:29:25.331941 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:29:26.673229 systemd[1]: Started sshd@23-172.31.17.228:22-139.178.68.195:59132.service - OpenSSH per-connection server daemon (139.178.68.195:59132). Dec 12 17:29:26.682315 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:26.682394 kernel: audit: type=1130 audit(1765560566.672:912): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.17.228:22-139.178.68.195:59132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:26.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.17.228:22-139.178.68.195:59132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:26.903424 sshd[5933]: Accepted publickey for core from 139.178.68.195 port 59132 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:26.902000 audit[5933]: USER_ACCT pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.906380 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:26.903000 audit[5933]: CRED_ACQ pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.919954 kernel: audit: type=1101 audit(1765560566.902:913): pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.920102 kernel: audit: type=1103 audit(1765560566.903:914): pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.925689 kernel: audit: type=1006 audit(1765560566.903:915): pid=5933 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 17:29:26.903000 audit[5933]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffddfe6e60 a2=3 a3=0 items=0 ppid=1 pid=5933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:26.941063 kernel: audit: type=1300 audit(1765560566.903:915): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffddfe6e60 a2=3 a3=0 items=0 ppid=1 pid=5933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:26.941191 kernel: audit: type=1327 audit(1765560566.903:915): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:26.903000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:26.944406 systemd-logind[1883]: New session 24 of user core. Dec 12 17:29:26.957064 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 12 17:29:26.964000 audit[5933]: USER_START pid=5933 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.974763 kernel: audit: type=1105 audit(1765560566.964:916): pid=5933 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.974000 audit[5936]: CRED_ACQ pid=5936 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.982748 kernel: audit: type=1103 audit(1765560566.974:917): pid=5936 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.199871 sshd[5936]: Connection closed by 139.178.68.195 port 59132 Dec 12 17:29:27.200845 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:27.204000 audit[5933]: USER_END pid=5933 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.212006 systemd[1]: sshd@23-172.31.17.228:22-139.178.68.195:59132.service: Deactivated successfully. Dec 12 17:29:27.204000 audit[5933]: CRED_DISP pid=5933 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.219695 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 17:29:27.220069 kernel: audit: type=1106 audit(1765560567.204:918): pid=5933 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.220351 kernel: audit: type=1104 audit(1765560567.204:919): pid=5933 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.17.228:22-139.178.68.195:59132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:27.228711 systemd-logind[1883]: Session 24 logged out. Waiting for processes to exit. Dec 12 17:29:27.230965 systemd-logind[1883]: Removed session 24. 
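The kubelet keeps reporting the same ImagePullBackOff for every Calico component because none of the ghcr.io/flatcar/calico/*:v3.30.4 tags resolve ("failed to resolve image ... not found"). If crictl is present on the node, the failure can be reproduced directly with, for example, crictl pull ghcr.io/flatcar/calico/kube-controllers:v3.30.4. Independently of the node, the registry itself can be asked whether the tag exists; the snippet below is only an illustrative check using the standard registry v2 flow, on the assumption that ghcr.io issues anonymous pull tokens from its /token endpoint the way public registries normally do.

# Illustrative registry check (assumption: anonymous pull tokens from ghcr.io/token).
import json, urllib.error, urllib.request

repo, tag = "flatcar/calico/kube-controllers", "v3.30.4"
tok = json.load(urllib.request.urlopen(
    f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
req = urllib.request.Request(
    f"https://ghcr.io/v2/{repo}/manifests/{tag}",
    headers={"Authorization": f"Bearer {tok}",
             "Accept": "application/vnd.oci.image.index.v1+json"})
try:
    urllib.request.urlopen(req)
    print("tag exists")
except urllib.error.HTTPError as e:
    print("registry answered", e.code)  # a 404 here matches the "not found" logged above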
Dec 12 17:29:28.325042 kubelet[3528]: E1212 17:29:28.324960 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:29:28.327861 kubelet[3528]: E1212 17:29:28.327790 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:29:29.331453 kubelet[3528]: E1212 17:29:29.330492 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:29:32.242964 systemd[1]: Started sshd@24-172.31.17.228:22-139.178.68.195:42900.service - OpenSSH per-connection server daemon (139.178.68.195:42900). Dec 12 17:29:32.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.17.228:22-139.178.68.195:42900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:32.246875 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:32.246993 kernel: audit: type=1130 audit(1765560572.244:921): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.17.228:22-139.178.68.195:42900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:32.474000 audit[5948]: USER_ACCT pid=5948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.481319 sshd[5948]: Accepted publickey for core from 139.178.68.195 port 42900 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:32.482000 audit[5948]: CRED_ACQ pid=5948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.489675 kernel: audit: type=1101 audit(1765560572.474:922): pid=5948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.489762 kernel: audit: type=1103 audit(1765560572.482:923): pid=5948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.493996 kernel: audit: type=1006 audit(1765560572.482:924): pid=5948 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 12 17:29:32.482000 audit[5948]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6b304a0 a2=3 a3=0 items=0 ppid=1 pid=5948 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:32.500981 kernel: audit: type=1300 audit(1765560572.482:924): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6b304a0 a2=3 a3=0 items=0 ppid=1 pid=5948 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:32.482000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:32.504159 kernel: audit: type=1327 audit(1765560572.482:924): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:32.506302 sshd-session[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:32.518860 systemd-logind[1883]: New session 25 of user core. Dec 12 17:29:32.529098 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 12 17:29:32.538000 audit[5948]: USER_START pid=5948 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.546737 kernel: audit: type=1105 audit(1765560572.538:925): pid=5948 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.550000 audit[5951]: CRED_ACQ pid=5951 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.556730 kernel: audit: type=1103 audit(1765560572.550:926): pid=5951 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.782537 sshd[5951]: Connection closed by 139.178.68.195 port 42900 Dec 12 17:29:32.783410 sshd-session[5948]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:32.785000 audit[5948]: USER_END pid=5948 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.797363 systemd-logind[1883]: Session 25 logged out. Waiting for processes to exit. Dec 12 17:29:32.785000 audit[5948]: CRED_DISP pid=5948 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.798937 systemd[1]: sshd@24-172.31.17.228:22-139.178.68.195:42900.service: Deactivated successfully. Dec 12 17:29:32.804744 kernel: audit: type=1106 audit(1765560572.785:927): pid=5948 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.804867 kernel: audit: type=1104 audit(1765560572.785:928): pid=5948 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:32.805547 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 17:29:32.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.17.228:22-139.178.68.195:42900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:32.816894 systemd-logind[1883]: Removed session 25. 
Dec 12 17:29:36.325202 kubelet[3528]: E1212 17:29:36.325036 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc" Dec 12 17:29:37.331170 kubelet[3528]: E1212 17:29:37.330953 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:29:37.833408 systemd[1]: Started sshd@25-172.31.17.228:22-139.178.68.195:42902.service - OpenSSH per-connection server daemon (139.178.68.195:42902). Dec 12 17:29:37.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.17.228:22-139.178.68.195:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:37.837322 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:37.837400 kernel: audit: type=1130 audit(1765560577.834:930): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.17.228:22-139.178.68.195:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:38.044000 audit[5971]: USER_ACCT pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.045306 sshd[5971]: Accepted publickey for core from 139.178.68.195 port 42902 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:38.052724 kernel: audit: type=1101 audit(1765560578.044:931): pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.053000 audit[5971]: CRED_ACQ pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.055825 sshd-session[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:38.064443 kernel: audit: type=1103 audit(1765560578.053:932): pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.064730 kernel: audit: type=1006 audit(1765560578.053:933): pid=5971 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 12 17:29:38.053000 audit[5971]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebad7c00 a2=3 a3=0 items=0 ppid=1 pid=5971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:38.072046 kernel: audit: type=1300 audit(1765560578.053:933): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebad7c00 a2=3 a3=0 items=0 ppid=1 pid=5971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:38.053000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:38.076334 kernel: audit: type=1327 audit(1765560578.053:933): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:38.086046 systemd-logind[1883]: New session 26 of user core. Dec 12 17:29:38.093483 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 12 17:29:38.107000 audit[5971]: USER_START pid=5971 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.118792 kernel: audit: type=1105 audit(1765560578.107:934): pid=5971 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.119000 audit[5974]: CRED_ACQ pid=5974 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.131761 kernel: audit: type=1103 audit(1765560578.119:935): pid=5974 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.365903 sshd[5974]: Connection closed by 139.178.68.195 port 42902 Dec 12 17:29:38.367597 sshd-session[5971]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:38.372000 audit[5971]: USER_END pid=5971 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.373000 audit[5971]: CRED_DISP pid=5971 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.388573 systemd[1]: sshd@25-172.31.17.228:22-139.178.68.195:42902.service: Deactivated successfully. Dec 12 17:29:38.394506 kernel: audit: type=1106 audit(1765560578.372:936): pid=5971 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.394846 kernel: audit: type=1104 audit(1765560578.373:937): pid=5971 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:38.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.17.228:22-139.178.68.195:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:38.401527 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 17:29:38.409390 systemd-logind[1883]: Session 26 logged out. Waiting for processes to exit. Dec 12 17:29:38.413321 systemd-logind[1883]: Removed session 26. 
Dec 12 17:29:39.329256 containerd[1911]: time="2025-12-12T17:29:39.328814322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:29:39.644243 containerd[1911]: time="2025-12-12T17:29:39.643997576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:39.646382 containerd[1911]: time="2025-12-12T17:29:39.646248248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:29:39.646814 containerd[1911]: time="2025-12-12T17:29:39.646304660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:39.647295 kubelet[3528]: E1212 17:29:39.647226 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:29:39.648295 kubelet[3528]: E1212 17:29:39.647306 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:29:39.648295 kubelet[3528]: E1212 17:29:39.647588 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pq5rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6755b5785f-427k6_calico-system(b11dd849-e38b-40e1-a2d3-0061a9f777d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:39.648989 kubelet[3528]: E1212 17:29:39.648881 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:29:40.325364 kubelet[3528]: E1212 17:29:40.325142 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:29:40.327511 containerd[1911]: time="2025-12-12T17:29:40.327403159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:29:40.626080 containerd[1911]: time="2025-12-12T17:29:40.625839273Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:40.628214 containerd[1911]: time="2025-12-12T17:29:40.628071897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:29:40.628565 containerd[1911]: time="2025-12-12T17:29:40.628135965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:40.629280 kubelet[3528]: E1212 17:29:40.629223 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:29:40.629407 kubelet[3528]: E1212 17:29:40.629316 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:29:40.630182 kubelet[3528]: E1212 17:29:40.630103 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:40.634162 containerd[1911]: time="2025-12-12T17:29:40.633810069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:29:40.888402 containerd[1911]: time="2025-12-12T17:29:40.887189158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:40.892456 containerd[1911]: time="2025-12-12T17:29:40.891822334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:29:40.892849 containerd[1911]: time="2025-12-12T17:29:40.892676302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:40.893439 kubelet[3528]: E1212 17:29:40.893323 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:29:40.893439 kubelet[3528]: E1212 17:29:40.893406 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:29:40.896209 kubelet[3528]: E1212 17:29:40.893594 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5phxm_calico-system(5ad89cf6-178c-4c89-9906-56d3d4e0dba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:40.896209 kubelet[3528]: E1212 17:29:40.895279 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:29:43.326582 containerd[1911]: time="2025-12-12T17:29:43.325979626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:29:43.603612 containerd[1911]: time="2025-12-12T17:29:43.603532488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:43.605966 containerd[1911]: time="2025-12-12T17:29:43.605897700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:29:43.606068 containerd[1911]: time="2025-12-12T17:29:43.606009504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:43.606357 kubelet[3528]: E1212 17:29:43.606308 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:43.607330 kubelet[3528]: E1212 17:29:43.606574 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:43.607330 kubelet[3528]: E1212 17:29:43.606828 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc92w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b5446659b-7cz6s_calico-apiserver(464adca8-6b09-4273-9180-6050c84a6f28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:43.608128 kubelet[3528]: E1212 17:29:43.608077 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28" Dec 12 17:29:47.325393 containerd[1911]: time="2025-12-12T17:29:47.325292966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:29:47.607158 containerd[1911]: time="2025-12-12T17:29:47.607038231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:47.609321 containerd[1911]: time="2025-12-12T17:29:47.609252327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:29:47.609441 containerd[1911]: time="2025-12-12T17:29:47.609362331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:47.609650 kubelet[3528]: E1212 17:29:47.609591 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:29:47.610182 kubelet[3528]: E1212 17:29:47.609680 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:29:47.610182 kubelet[3528]: E1212 17:29:47.609837 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9f0be7f5aecb4214a695a0df6daf94fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8zbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7d45d746-ssppz_calico-system(c3f845e7-b0d1-412d-aad9-3771e0979bfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:47.612507 containerd[1911]: time="2025-12-12T17:29:47.612193599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:29:47.945606 containerd[1911]: time="2025-12-12T17:29:47.945456533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:47.948422 containerd[1911]: time="2025-12-12T17:29:47.948281165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:29:47.948422 containerd[1911]: time="2025-12-12T17:29:47.948362561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:47.948648 kubelet[3528]: E1212 17:29:47.948566 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:29:47.948648 kubelet[3528]: E1212 17:29:47.948625 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:29:47.949589 kubelet[3528]: E1212 17:29:47.948822 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7d45d746-ssppz_calico-system(c3f845e7-b0d1-412d-aad9-3771e0979bfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:47.950247 kubelet[3528]: E1212 17:29:47.950196 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc" Dec 12 17:29:51.604407 systemd[1]: cri-containerd-8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f.scope: Deactivated successfully. Dec 12 17:29:51.607054 systemd[1]: cri-containerd-8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f.scope: Consumed 24.890s CPU time, 106.2M memory peak. 
Dec 12 17:29:51.609000 audit: BPF prog-id=152 op=UNLOAD Dec 12 17:29:51.613328 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:51.613415 kernel: audit: type=1334 audit(1765560591.609:939): prog-id=152 op=UNLOAD Dec 12 17:29:51.609000 audit: BPF prog-id=156 op=UNLOAD Dec 12 17:29:51.617608 kernel: audit: type=1334 audit(1765560591.609:940): prog-id=156 op=UNLOAD Dec 12 17:29:51.618782 containerd[1911]: time="2025-12-12T17:29:51.618485839Z" level=info msg="received container exit event container_id:\"8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f\" id:\"8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f\" pid:3854 exit_status:1 exited_at:{seconds:1765560591 nanos:616507975}" Dec 12 17:29:51.660689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f-rootfs.mount: Deactivated successfully. Dec 12 17:29:52.205442 kubelet[3528]: E1212 17:29:52.205039 3528 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:29:52.224544 kubelet[3528]: I1212 17:29:52.224259 3528 scope.go:117] "RemoveContainer" containerID="8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f" Dec 12 17:29:52.228880 containerd[1911]: time="2025-12-12T17:29:52.228536070Z" level=info msg="CreateContainer within sandbox \"a2d9c5069c5c76b24385ec1ac4c0e128878b5991547ec01878d864b81c8f3542\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 12 17:29:52.249755 containerd[1911]: time="2025-12-12T17:29:52.247578415Z" level=info msg="Container 3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:52.261468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228874167.mount: Deactivated successfully. Dec 12 17:29:52.265812 containerd[1911]: time="2025-12-12T17:29:52.265603651Z" level=info msg="CreateContainer within sandbox \"a2d9c5069c5c76b24385ec1ac4c0e128878b5991547ec01878d864b81c8f3542\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9\"" Dec 12 17:29:52.267394 containerd[1911]: time="2025-12-12T17:29:52.267123127Z" level=info msg="StartContainer for \"3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9\"" Dec 12 17:29:52.269259 containerd[1911]: time="2025-12-12T17:29:52.269152243Z" level=info msg="connecting to shim 3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9" address="unix:///run/containerd/s/e74a6ed098ce1b5321d9990f9d417bc998c5999cfe1e8d5368a7c070443c03da" protocol=ttrpc version=3 Dec 12 17:29:52.307041 systemd[1]: Started cri-containerd-3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9.scope - libcontainer container 3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9. 
Dec 12 17:29:52.325058 containerd[1911]: time="2025-12-12T17:29:52.324937807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:29:52.337000 audit: BPF prog-id=262 op=LOAD Dec 12 17:29:52.339000 audit: BPF prog-id=263 op=LOAD Dec 12 17:29:52.342028 kernel: audit: type=1334 audit(1765560592.337:941): prog-id=262 op=LOAD Dec 12 17:29:52.342194 kernel: audit: type=1334 audit(1765560592.339:942): prog-id=263 op=LOAD Dec 12 17:29:52.342246 kernel: audit: type=1300 audit(1765560592.339:942): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3631 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:52.339000 audit[6045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3631 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:52.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343839613337353161336332663263353630303763323437366237 Dec 12 17:29:52.354400 kernel: audit: type=1327 audit(1765560592.339:942): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343839613337353161336332663263353630303763323437366237 Dec 12 17:29:52.354544 kernel: audit: type=1334 audit(1765560592.339:943): prog-id=263 op=UNLOAD Dec 12 17:29:52.339000 audit: BPF prog-id=263 op=UNLOAD Dec 12 17:29:52.339000 audit[6045]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:52.361859 kernel: audit: type=1300 audit(1765560592.339:943): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:52.367923 kernel: audit: type=1327 audit(1765560592.339:943): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343839613337353161336332663263353630303763323437366237 Dec 12 17:29:52.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343839613337353161336332663263353630303763323437366237 Dec 12 17:29:52.339000 audit: BPF prog-id=264 op=LOAD Dec 12 17:29:52.369528 kernel: audit: type=1334 audit(1765560592.339:944): prog-id=264 op=LOAD Dec 12 17:29:52.339000 audit[6045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3631 pid=6045 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:52.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343839613337353161336332663263353630303763323437366237 Dec 12 17:29:52.341000 audit: BPF prog-id=265 op=LOAD Dec 12 17:29:52.341000 audit[6045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3631 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:52.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343839613337353161336332663263353630303763323437366237 Dec 12 17:29:52.347000 audit: BPF prog-id=265 op=UNLOAD Dec 12 17:29:52.347000 audit[6045]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:52.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343839613337353161336332663263353630303763323437366237 Dec 12 17:29:52.347000 audit: BPF prog-id=264 op=UNLOAD Dec 12 17:29:52.347000 audit[6045]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3631 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:52.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343839613337353161336332663263353630303763323437366237 Dec 12 17:29:52.347000 audit: BPF prog-id=266 op=LOAD Dec 12 17:29:52.347000 audit[6045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3631 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:52.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343839613337353161336332663263353630303763323437366237 Dec 12 17:29:52.411393 containerd[1911]: time="2025-12-12T17:29:52.411245227Z" level=info msg="StartContainer for \"3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9\" returns successfully" Dec 12 17:29:52.567194 systemd[1]: cri-containerd-61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2.scope: Deactivated successfully. 
Dec 12 17:29:52.567853 systemd[1]: cri-containerd-61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2.scope: Consumed 6.536s CPU time, 61.6M memory peak, 256K read from disk. Dec 12 17:29:52.571000 audit: BPF prog-id=267 op=LOAD Dec 12 17:29:52.571000 audit: BPF prog-id=94 op=UNLOAD Dec 12 17:29:52.572000 audit: BPF prog-id=109 op=UNLOAD Dec 12 17:29:52.572000 audit: BPF prog-id=113 op=UNLOAD Dec 12 17:29:52.579003 containerd[1911]: time="2025-12-12T17:29:52.578830544Z" level=info msg="received container exit event container_id:\"61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2\" id:\"61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2\" pid:3086 exit_status:1 exited_at:{seconds:1765560592 nanos:578065208}" Dec 12 17:29:52.617006 containerd[1911]: time="2025-12-12T17:29:52.616938896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:52.620573 containerd[1911]: time="2025-12-12T17:29:52.620368448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:29:52.620573 containerd[1911]: time="2025-12-12T17:29:52.620492744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:52.621894 kubelet[3528]: E1212 17:29:52.621832 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:29:52.622073 kubelet[3528]: E1212 17:29:52.621902 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:29:52.622179 kubelet[3528]: E1212 17:29:52.622096 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6zf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fxswz_calico-system(4993f81d-df62-4e56-b3d3-f820e3c156d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:52.623899 kubelet[3528]: E1212 17:29:52.623842 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6" Dec 12 17:29:52.629580 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2-rootfs.mount: Deactivated successfully. Dec 12 17:29:53.231893 kubelet[3528]: I1212 17:29:53.231634 3528 scope.go:117] "RemoveContainer" containerID="61ff1feafc52636fba3fb2f695b8b251f972191d7edb2a5ff71e4758843b51a2" Dec 12 17:29:53.237295 containerd[1911]: time="2025-12-12T17:29:53.237226579Z" level=info msg="CreateContainer within sandbox \"ecc0713f4452cb01518b93d2a10976bb1ee3ccc3d64c74cbbfcee1a71177c039\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 12 17:29:53.259392 containerd[1911]: time="2025-12-12T17:29:53.259024472Z" level=info msg="Container 307a84081106a169de12837504ca97c345a94f90c2b46d058115e2c8fed3e1e0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:53.281267 containerd[1911]: time="2025-12-12T17:29:53.281198948Z" level=info msg="CreateContainer within sandbox \"ecc0713f4452cb01518b93d2a10976bb1ee3ccc3d64c74cbbfcee1a71177c039\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"307a84081106a169de12837504ca97c345a94f90c2b46d058115e2c8fed3e1e0\"" Dec 12 17:29:53.282390 containerd[1911]: time="2025-12-12T17:29:53.282340388Z" level=info msg="StartContainer for \"307a84081106a169de12837504ca97c345a94f90c2b46d058115e2c8fed3e1e0\"" Dec 12 17:29:53.284873 containerd[1911]: time="2025-12-12T17:29:53.284774420Z" level=info msg="connecting to shim 307a84081106a169de12837504ca97c345a94f90c2b46d058115e2c8fed3e1e0" address="unix:///run/containerd/s/66f7a9c42dae5bb5888110d5e88fc64d6ef76523eb959ad8b564477de9096fe0" protocol=ttrpc version=3 Dec 12 17:29:53.324032 systemd[1]: Started cri-containerd-307a84081106a169de12837504ca97c345a94f90c2b46d058115e2c8fed3e1e0.scope - libcontainer container 307a84081106a169de12837504ca97c345a94f90c2b46d058115e2c8fed3e1e0. 
Dec 12 17:29:53.329771 kubelet[3528]: E1212 17:29:53.329573 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0" Dec 12 17:29:53.356000 audit: BPF prog-id=268 op=LOAD Dec 12 17:29:53.357000 audit: BPF prog-id=269 op=LOAD Dec 12 17:29:53.357000 audit[6089]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2934 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:53.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376138343038313130366131363964653132383337353034636139 Dec 12 17:29:53.357000 audit: BPF prog-id=269 op=UNLOAD Dec 12 17:29:53.357000 audit[6089]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:53.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376138343038313130366131363964653132383337353034636139 Dec 12 17:29:53.358000 audit: BPF prog-id=270 op=LOAD Dec 12 17:29:53.358000 audit[6089]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2934 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:53.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376138343038313130366131363964653132383337353034636139 Dec 12 17:29:53.358000 audit: BPF prog-id=271 op=LOAD Dec 12 17:29:53.358000 audit[6089]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2934 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:53.358000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376138343038313130366131363964653132383337353034636139 Dec 12 17:29:53.358000 audit: BPF prog-id=271 op=UNLOAD Dec 12 17:29:53.358000 audit[6089]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:53.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376138343038313130366131363964653132383337353034636139 Dec 12 17:29:53.358000 audit: BPF prog-id=270 op=UNLOAD Dec 12 17:29:53.358000 audit[6089]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:53.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376138343038313130366131363964653132383337353034636139 Dec 12 17:29:53.359000 audit: BPF prog-id=272 op=LOAD Dec 12 17:29:53.359000 audit[6089]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2934 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:53.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376138343038313130366131363964653132383337353034636139 Dec 12 17:29:53.432590 containerd[1911]: time="2025-12-12T17:29:53.432531440Z" level=info msg="StartContainer for \"307a84081106a169de12837504ca97c345a94f90c2b46d058115e2c8fed3e1e0\" returns successfully" Dec 12 17:29:54.324564 containerd[1911]: time="2025-12-12T17:29:54.324499689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:29:54.326324 kubelet[3528]: E1212 17:29:54.324908 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6" Dec 12 17:29:54.603107 containerd[1911]: time="2025-12-12T17:29:54.602895802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:54.605329 containerd[1911]: time="2025-12-12T17:29:54.605094994Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:29:54.605329 containerd[1911]: time="2025-12-12T17:29:54.605239114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:54.605549 kubelet[3528]: E1212 17:29:54.605459 3528 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:54.605549 kubelet[3528]: E1212 17:29:54.605519 3528 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:54.605807 kubelet[3528]: E1212 17:29:54.605724 3528 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvxv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b5446659b-48cdq_calico-apiserver(4f54519e-b15c-42cf-aa0f-f8649bda1c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:54.606988 kubelet[3528]: E1212 17:29:54.606941 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94" Dec 12 17:29:57.713893 systemd[1]: cri-containerd-755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c.scope: Deactivated successfully. Dec 12 17:29:57.715398 systemd[1]: cri-containerd-755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c.scope: Consumed 4.409s CPU time, 21.4M memory peak, 332K read from disk. Dec 12 17:29:57.715000 audit: BPF prog-id=273 op=LOAD Dec 12 17:29:57.718624 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 12 17:29:57.718850 kernel: audit: type=1334 audit(1765560597.715:961): prog-id=273 op=LOAD Dec 12 17:29:57.721254 containerd[1911]: time="2025-12-12T17:29:57.721198214Z" level=info msg="received container exit event container_id:\"755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c\" id:\"755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c\" pid:3096 exit_status:1 exited_at:{seconds:1765560597 nanos:720627638}" Dec 12 17:29:57.715000 audit: BPF prog-id=99 op=UNLOAD Dec 12 17:29:57.724354 kernel: audit: type=1334 audit(1765560597.715:962): prog-id=99 op=UNLOAD Dec 12 17:29:57.720000 audit: BPF prog-id=114 op=UNLOAD Dec 12 17:29:57.727230 kernel: audit: type=1334 audit(1765560597.720:963): prog-id=114 op=UNLOAD Dec 12 17:29:57.727617 kernel: audit: type=1334 audit(1765560597.720:964): prog-id=118 op=UNLOAD Dec 12 17:29:57.720000 audit: BPF prog-id=118 op=UNLOAD Dec 12 17:29:57.772673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c-rootfs.mount: Deactivated successfully. Dec 12 17:29:58.268702 kubelet[3528]: I1212 17:29:58.268626 3528 scope.go:117] "RemoveContainer" containerID="755f8cecce3f6011efbc588147c4a7ad5747677178bb59031d7a48e9c6d4d28c" Dec 12 17:29:58.272222 containerd[1911]: time="2025-12-12T17:29:58.272173776Z" level=info msg="CreateContainer within sandbox \"9208360a0a5479dcbbea6d6565672ef270ce58f639d1fd6e0187503d9e20b4fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 12 17:29:58.298719 containerd[1911]: time="2025-12-12T17:29:58.294164761Z" level=info msg="Container a71d25322468c9a7de8fdd54f8cd5b90d4f5c87b25a03f5066eb1bd54e94d695: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:58.298556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609022877.mount: Deactivated successfully. 
Dec 12 17:29:58.317145 containerd[1911]: time="2025-12-12T17:29:58.317073649Z" level=info msg="CreateContainer within sandbox \"9208360a0a5479dcbbea6d6565672ef270ce58f639d1fd6e0187503d9e20b4fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a71d25322468c9a7de8fdd54f8cd5b90d4f5c87b25a03f5066eb1bd54e94d695\""
Dec 12 17:29:58.318038 containerd[1911]: time="2025-12-12T17:29:58.317990797Z" level=info msg="StartContainer for \"a71d25322468c9a7de8fdd54f8cd5b90d4f5c87b25a03f5066eb1bd54e94d695\""
Dec 12 17:29:58.320295 containerd[1911]: time="2025-12-12T17:29:58.320235697Z" level=info msg="connecting to shim a71d25322468c9a7de8fdd54f8cd5b90d4f5c87b25a03f5066eb1bd54e94d695" address="unix:///run/containerd/s/922d4b1877b21ea61c091444366624dc4fd525f94357720e7652aaa90e7ec905" protocol=ttrpc version=3
Dec 12 17:29:58.327591 kubelet[3528]: E1212 17:29:58.327534 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28"
Dec 12 17:29:58.363104 systemd[1]: Started cri-containerd-a71d25322468c9a7de8fdd54f8cd5b90d4f5c87b25a03f5066eb1bd54e94d695.scope - libcontainer container a71d25322468c9a7de8fdd54f8cd5b90d4f5c87b25a03f5066eb1bd54e94d695.
Dec 12 17:29:58.386000 audit: BPF prog-id=274 op=LOAD
Dec 12 17:29:58.390905 kernel: audit: type=1334 audit(1765560598.386:965): prog-id=274 op=LOAD
Dec 12 17:29:58.391031 kernel: audit: type=1334 audit(1765560598.388:966): prog-id=275 op=LOAD
Dec 12 17:29:58.388000 audit: BPF prog-id=275 op=LOAD
Dec 12 17:29:58.388000 audit[6140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2968 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:58.398310 kernel: audit: type=1300 audit(1765560598.388:966): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2968 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:58.398570 kernel: audit: type=1327 audit(1765560598.388:966): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137316432353332323436386339613764653866646435346638636435
Dec 12 17:29:58.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137316432353332323436386339613764653866646435346638636435
Dec 12 17:29:58.389000 audit: BPF prog-id=275 op=UNLOAD
Dec 12 17:29:58.405980 kernel: audit: type=1334 audit(1765560598.389:967): prog-id=275 op=UNLOAD
Dec 12 17:29:58.389000 audit[6140]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:58.412560 kernel: audit: type=1300 audit(1765560598.389:967): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:58.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137316432353332323436386339613764653866646435346638636435
Dec 12 17:29:58.389000 audit: BPF prog-id=276 op=LOAD
Dec 12 17:29:58.389000 audit[6140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2968 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:58.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137316432353332323436386339613764653866646435346638636435
Dec 12 17:29:58.391000 audit: BPF prog-id=277 op=LOAD
Dec 12 17:29:58.391000 audit[6140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2968 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:58.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137316432353332323436386339613764653866646435346638636435
Dec 12 17:29:58.396000 audit: BPF prog-id=277 op=UNLOAD
Dec 12 17:29:58.396000 audit[6140]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:58.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137316432353332323436386339613764653866646435346638636435
Dec 12 17:29:58.396000 audit: BPF prog-id=276 op=UNLOAD
Dec 12 17:29:58.396000 audit[6140]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:58.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137316432353332323436386339613764653866646435346638636435
Dec 12 17:29:58.396000 audit: BPF prog-id=278 op=LOAD
Dec 12 17:29:58.396000 audit[6140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2968 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:58.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137316432353332323436386339613764653866646435346638636435
Dec 12 17:29:58.473691 containerd[1911]: time="2025-12-12T17:29:58.473626645Z" level=info msg="StartContainer for \"a71d25322468c9a7de8fdd54f8cd5b90d4f5c87b25a03f5066eb1bd54e94d695\" returns successfully"
Dec 12 17:30:00.324696 kubelet[3528]: E1212 17:30:00.324600 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc"
Dec 12 17:30:02.205870 kubelet[3528]: E1212 17:30:02.205637 3528 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:30:03.897287 systemd[1]: cri-containerd-3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9.scope: Deactivated successfully.
Dec 12 17:30:03.899000 audit: BPF prog-id=262 op=UNLOAD
Dec 12 17:30:03.902047 containerd[1911]: time="2025-12-12T17:30:03.900915068Z" level=info msg="received container exit event container_id:\"3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9\" id:\"3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9\" pid:6057 exit_status:1 exited_at:{seconds:1765560603 nanos:900429044}"
Dec 12 17:30:03.902512 kernel: kauditd_printk_skb: 16 callbacks suppressed
Dec 12 17:30:03.902566 kernel: audit: type=1334 audit(1765560603.899:973): prog-id=262 op=UNLOAD
Dec 12 17:30:03.899000 audit: BPF prog-id=266 op=UNLOAD
Dec 12 17:30:03.905786 kernel: audit: type=1334 audit(1765560603.899:974): prog-id=266 op=UNLOAD
Dec 12 17:30:03.948025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9-rootfs.mount: Deactivated successfully.
Dec 12 17:30:04.305868 kubelet[3528]: I1212 17:30:04.305224 3528 scope.go:117] "RemoveContainer" containerID="8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f"
Dec 12 17:30:04.307209 kubelet[3528]: I1212 17:30:04.307089 3528 scope.go:117] "RemoveContainer" containerID="3b489a3751a3c2f2c56007c2476b70c1ce6299fed1aebe5044108429e986b3e9"
Dec 12 17:30:04.307772 kubelet[3528]: E1212 17:30:04.307583 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-6fjl5_tigera-operator(e910d3af-f6ed-46c3-b4af-89fd07a8b3ed)\"" pod="tigera-operator/tigera-operator-7dcd859c48-6fjl5" podUID="e910d3af-f6ed-46c3-b4af-89fd07a8b3ed"
Dec 12 17:30:04.309852 containerd[1911]: time="2025-12-12T17:30:04.309799110Z" level=info msg="RemoveContainer for \"8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f\""
Dec 12 17:30:04.318705 containerd[1911]: time="2025-12-12T17:30:04.318565734Z" level=info msg="RemoveContainer for \"8ae192b3c40c001770f9f8bcd04be93c0782377c95ce0df3d503c690f231006f\" returns successfully"
Dec 12 17:30:05.324889 kubelet[3528]: E1212 17:30:05.324804 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6755b5785f-427k6" podUID="b11dd849-e38b-40e1-a2d3-0061a9f777d6"
Dec 12 17:30:07.325216 kubelet[3528]: E1212 17:30:07.324925 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fxswz" podUID="4993f81d-df62-4e56-b3d3-f820e3c156d6"
Dec 12 17:30:07.326751 kubelet[3528]: E1212 17:30:07.326631 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5phxm" podUID="5ad89cf6-178c-4c89-9906-56d3d4e0dba0"
Dec 12 17:30:10.326273 kubelet[3528]: E1212 17:30:10.325791 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-7cz6s" podUID="464adca8-6b09-4273-9180-6050c84a6f28"
Dec 12 17:30:10.326273 kubelet[3528]: E1212 17:30:10.326018 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5446659b-48cdq" podUID="4f54519e-b15c-42cf-aa0f-f8649bda1c94"
Dec 12 17:30:12.207257 kubelet[3528]: E1212 17:30:12.206813 3528 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-228?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:30:12.325886 kubelet[3528]: E1212 17:30:12.325613 3528 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7d45d746-ssppz" podUID="c3f845e7-b0d1-412d-aad9-3771e0979bfc"