Dec 12 17:25:57.087754 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 12 17:25:57.087805 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 12 17:25:57.087831 kernel: KASLR disabled due to lack of seed Dec 12 17:25:57.087847 kernel: efi: EFI v2.7 by EDK II Dec 12 17:25:57.087864 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Dec 12 17:25:57.087880 kernel: secureboot: Secure boot disabled Dec 12 17:25:57.087898 kernel: ACPI: Early table checksum verification disabled Dec 12 17:25:57.087913 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 12 17:25:57.087930 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 12 17:25:57.087950 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 12 17:25:57.087967 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 12 17:25:57.087982 kernel: ACPI: FACS 0x0000000078630000 000040 Dec 12 17:25:57.087998 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 12 17:25:57.088014 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 12 17:25:57.088037 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 12 17:25:57.088054 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 12 17:25:57.088072 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 12 17:25:57.088088 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 12 17:25:57.088105 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 12 17:25:57.088122 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 12 17:25:57.088138 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 12 17:25:57.088155 kernel: printk: legacy bootconsole [uart0] enabled Dec 12 17:25:57.088172 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 12 17:25:57.088189 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 12 17:25:57.088210 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Dec 12 17:25:57.088227 kernel: Zone ranges: Dec 12 17:25:57.088244 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 12 17:25:57.088260 kernel: DMA32 empty Dec 12 17:25:57.088304 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 12 17:25:57.088354 kernel: Device empty Dec 12 17:25:57.088373 kernel: Movable zone start for each node Dec 12 17:25:57.088390 kernel: Early memory node ranges Dec 12 17:25:57.088407 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 12 17:25:57.088424 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 12 17:25:57.088441 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 12 17:25:57.088457 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 12 17:25:57.088482 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 12 17:25:57.088498 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 12 17:25:57.088515 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 12 17:25:57.088532 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 12 17:25:57.088556 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 12 17:25:57.088579 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Dec 12 17:25:57.088598 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Dec 12 17:25:57.088617 kernel: psci: probing for conduit method from ACPI. Dec 12 17:25:57.088634 kernel: psci: PSCIv1.0 detected in firmware. Dec 12 17:25:57.088652 kernel: psci: Using standard PSCI v0.2 function IDs Dec 12 17:25:57.088670 kernel: psci: Trusted OS migration not required Dec 12 17:25:57.088688 kernel: psci: SMC Calling Convention v1.1 Dec 12 17:25:57.088707 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Dec 12 17:25:57.088724 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 12 17:25:57.088747 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 12 17:25:57.088765 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 12 17:25:57.088783 kernel: Detected PIPT I-cache on CPU0 Dec 12 17:25:57.088800 kernel: CPU features: detected: GIC system register CPU interface Dec 12 17:25:57.088818 kernel: CPU features: detected: Spectre-v2 Dec 12 17:25:57.088835 kernel: CPU features: detected: Spectre-v3a Dec 12 17:25:57.088853 kernel: CPU features: detected: Spectre-BHB Dec 12 17:25:57.088870 kernel: CPU features: detected: ARM erratum 1742098 Dec 12 17:25:57.088888 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 12 17:25:57.088906 kernel: alternatives: applying boot alternatives Dec 12 17:25:57.088926 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:25:57.088952 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 17:25:57.089463 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:25:57.089483 kernel: Fallback order for Node 0: 0 Dec 12 17:25:57.089501 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Dec 12 17:25:57.089518 kernel: Policy zone: Normal Dec 12 17:25:57.089536 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 17:25:57.089553 kernel: software IO TLB: area num 2. Dec 12 17:25:57.089571 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Dec 12 17:25:57.089589 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 12 17:25:57.089606 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 17:25:57.089634 kernel: rcu: RCU event tracing is enabled. Dec 12 17:25:57.089653 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 12 17:25:57.089671 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 17:25:57.089689 kernel: Tracing variant of Tasks RCU enabled. Dec 12 17:25:57.089707 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 17:25:57.089724 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 12 17:25:57.089742 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 12 17:25:57.089760 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 17:25:57.089777 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 12 17:25:57.089795 kernel: GICv3: 96 SPIs implemented Dec 12 17:25:57.089812 kernel: GICv3: 0 Extended SPIs implemented Dec 12 17:25:57.089834 kernel: Root IRQ handler: gic_handle_irq Dec 12 17:25:57.089852 kernel: GICv3: GICv3 features: 16 PPIs Dec 12 17:25:57.089869 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 12 17:25:57.089886 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 12 17:25:57.089904 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 12 17:25:57.089921 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Dec 12 17:25:57.089940 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Dec 12 17:25:57.089958 kernel: GICv3: using LPI property table @0x0000000400110000 Dec 12 17:25:57.089975 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 12 17:25:57.089993 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Dec 12 17:25:57.090010 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 17:25:57.090032 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 12 17:25:57.090050 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 12 17:25:57.090068 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 12 17:25:57.090085 kernel: Console: colour dummy device 80x25 Dec 12 17:25:57.090104 kernel: printk: legacy console [tty1] enabled Dec 12 17:25:57.090123 kernel: ACPI: Core revision 20240827 Dec 12 17:25:57.090141 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 12 17:25:57.090160 kernel: pid_max: default: 32768 minimum: 301 Dec 12 17:25:57.090182 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 17:25:57.090201 kernel: landlock: Up and running. Dec 12 17:25:57.090219 kernel: SELinux: Initializing. Dec 12 17:25:57.090237 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:25:57.090255 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:25:57.090273 kernel: rcu: Hierarchical SRCU implementation. Dec 12 17:25:57.093346 kernel: rcu: Max phase no-delay instances is 400. Dec 12 17:25:57.093378 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 17:25:57.093409 kernel: Remapping and enabling EFI services. Dec 12 17:25:57.093427 kernel: smp: Bringing up secondary CPUs ... Dec 12 17:25:57.093446 kernel: Detected PIPT I-cache on CPU1 Dec 12 17:25:57.093464 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 12 17:25:57.093482 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Dec 12 17:25:57.093501 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 12 17:25:57.093519 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 17:25:57.093542 kernel: SMP: Total of 2 processors activated. 
Dec 12 17:25:57.093560 kernel: CPU: All CPU(s) started at EL1 Dec 12 17:25:57.093589 kernel: CPU features: detected: 32-bit EL0 Support Dec 12 17:25:57.093613 kernel: CPU features: detected: 32-bit EL1 Support Dec 12 17:25:57.093632 kernel: CPU features: detected: CRC32 instructions Dec 12 17:25:57.093651 kernel: alternatives: applying system-wide alternatives Dec 12 17:25:57.093672 kernel: Memory: 3823468K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 185652K reserved, 16384K cma-reserved) Dec 12 17:25:57.093692 kernel: devtmpfs: initialized Dec 12 17:25:57.093717 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 17:25:57.093736 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 12 17:25:57.093755 kernel: 23664 pages in range for non-PLT usage Dec 12 17:25:57.093774 kernel: 515184 pages in range for PLT usage Dec 12 17:25:57.093793 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 17:25:57.093817 kernel: SMBIOS 3.0.0 present. Dec 12 17:25:57.093835 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 12 17:25:57.093854 kernel: DMI: Memory slots populated: 0/0 Dec 12 17:25:57.093873 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 17:25:57.093892 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 12 17:25:57.093912 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 12 17:25:57.093931 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 12 17:25:57.093954 kernel: audit: initializing netlink subsys (disabled) Dec 12 17:25:57.093973 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1 Dec 12 17:25:57.093992 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 17:25:57.094010 kernel: cpuidle: using governor menu Dec 12 17:25:57.094029 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 12 17:25:57.094048 kernel: ASID allocator initialised with 65536 entries Dec 12 17:25:57.094067 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 17:25:57.094091 kernel: Serial: AMBA PL011 UART driver Dec 12 17:25:57.094110 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 17:25:57.094129 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 17:25:57.094148 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 12 17:25:57.094167 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 12 17:25:57.094186 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 17:25:57.094205 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 17:25:57.094228 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 12 17:25:57.094247 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 12 17:25:57.094266 kernel: ACPI: Added _OSI(Module Device) Dec 12 17:25:57.094355 kernel: ACPI: Added _OSI(Processor Device) Dec 12 17:25:57.094379 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 17:25:57.094399 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 17:25:57.094419 kernel: ACPI: Interpreter enabled Dec 12 17:25:57.094445 kernel: ACPI: Using GIC for interrupt routing Dec 12 17:25:57.094465 kernel: ACPI: MCFG table detected, 1 entries Dec 12 17:25:57.094485 kernel: ACPI: CPU0 has been hot-added Dec 12 17:25:57.094504 kernel: ACPI: CPU1 has been hot-added Dec 12 17:25:57.094524 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Dec 12 17:25:57.094934 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 17:25:57.095220 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 17:25:57.095602 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 17:25:57.095907 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Dec 12 17:25:57.096211 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Dec 12 17:25:57.096246 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 12 17:25:57.096266 kernel: acpiphp: Slot [1] registered Dec 12 17:25:57.103917 kernel: acpiphp: Slot [2] registered Dec 12 17:25:57.103964 kernel: acpiphp: Slot [3] registered Dec 12 17:25:57.103984 kernel: acpiphp: Slot [4] registered Dec 12 17:25:57.104003 kernel: acpiphp: Slot [5] registered Dec 12 17:25:57.104023 kernel: acpiphp: Slot [6] registered Dec 12 17:25:57.104042 kernel: acpiphp: Slot [7] registered Dec 12 17:25:57.104061 kernel: acpiphp: Slot [8] registered Dec 12 17:25:57.104079 kernel: acpiphp: Slot [9] registered Dec 12 17:25:57.104098 kernel: acpiphp: Slot [10] registered Dec 12 17:25:57.104122 kernel: acpiphp: Slot [11] registered Dec 12 17:25:57.104141 kernel: acpiphp: Slot [12] registered Dec 12 17:25:57.104160 kernel: acpiphp: Slot [13] registered Dec 12 17:25:57.104180 kernel: acpiphp: Slot [14] registered Dec 12 17:25:57.104199 kernel: acpiphp: Slot [15] registered Dec 12 17:25:57.104218 kernel: acpiphp: Slot [16] registered Dec 12 17:25:57.104237 kernel: acpiphp: Slot [17] registered Dec 12 17:25:57.104261 kernel: acpiphp: Slot [18] registered Dec 12 17:25:57.104319 kernel: acpiphp: Slot [19] registered Dec 12 17:25:57.104342 kernel: acpiphp: Slot [20] registered Dec 12 17:25:57.104362 kernel: acpiphp: Slot [21] registered Dec 12 17:25:57.104381 
kernel: acpiphp: Slot [22] registered Dec 12 17:25:57.104401 kernel: acpiphp: Slot [23] registered Dec 12 17:25:57.104420 kernel: acpiphp: Slot [24] registered Dec 12 17:25:57.104445 kernel: acpiphp: Slot [25] registered Dec 12 17:25:57.104465 kernel: acpiphp: Slot [26] registered Dec 12 17:25:57.104485 kernel: acpiphp: Slot [27] registered Dec 12 17:25:57.104504 kernel: acpiphp: Slot [28] registered Dec 12 17:25:57.104523 kernel: acpiphp: Slot [29] registered Dec 12 17:25:57.104542 kernel: acpiphp: Slot [30] registered Dec 12 17:25:57.104561 kernel: acpiphp: Slot [31] registered Dec 12 17:25:57.104580 kernel: PCI host bridge to bus 0000:00 Dec 12 17:25:57.104946 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 12 17:25:57.105256 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 12 17:25:57.107458 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 12 17:25:57.107730 kernel: pci_bus 0000:00: root bus resource [bus 00] Dec 12 17:25:57.108061 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Dec 12 17:25:57.108418 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Dec 12 17:25:57.108701 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Dec 12 17:25:57.109027 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Dec 12 17:25:57.110572 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Dec 12 17:25:57.110898 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 12 17:25:57.111224 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Dec 12 17:25:57.111547 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Dec 12 17:25:57.111839 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Dec 12 17:25:57.112127 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Dec 12 17:25:57.112469 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 12 17:25:57.112751 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 12 17:25:57.113064 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:25:57.113393 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 12 17:25:57.113430 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:25:57.113451 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:25:57.113472 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:25:57.113493 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:25:57.113514 kernel: iommu: Default domain type: Translated Dec 12 17:25:57.113544 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:25:57.113564 kernel: efivars: Registered efivars operations Dec 12 17:25:57.113584 kernel: vgaarb: loaded Dec 12 17:25:57.113604 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:25:57.113624 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:25:57.113644 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:25:57.113665 kernel: pnp: PnP ACPI init Dec 12 17:25:57.113999 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 12 17:25:57.114038 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:25:57.114060 kernel: NET: Registered PF_INET protocol family Dec 12 17:25:57.114080 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 17:25:57.114100 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 17:25:57.114121 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:25:57.114141 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:25:57.114171 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 17:25:57.114191 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 17:25:57.114211 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:25:57.114231 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:25:57.114251 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:25:57.114272 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:25:57.114392 kernel: kvm [1]: HYP mode not available Dec 12 17:25:57.114422 kernel: Initialise system trusted keyrings Dec 12 17:25:57.114443 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 17:25:57.114464 kernel: Key type asymmetric registered Dec 12 17:25:57.114484 kernel: Asymmetric key parser 'x509' registered Dec 12 17:25:57.117205 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:25:57.117229 kernel: io scheduler mq-deadline registered Dec 12 17:25:57.117250 kernel: io scheduler kyber registered Dec 12 17:25:57.117304 kernel: io scheduler bfq registered Dec 12 17:25:57.117713 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 12 17:25:57.117756 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:25:57.117778 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:25:57.117799 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 12 17:25:57.117819 kernel: ACPI: button: Sleep Button [SLPB] Dec 12 17:25:57.117859 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:25:57.117881 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 12 17:25:57.118226 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 12 17:25:57.118265 kernel: printk: legacy console [ttyS0] disabled Dec 12 17:25:57.118336 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 12 17:25:57.118359 kernel: printk: legacy console [ttyS0] enabled Dec 12 17:25:57.118379 kernel: printk: legacy bootconsole [uart0] disabled Dec 12 17:25:57.118414 kernel: thunder_xcv, ver 1.0 Dec 12 17:25:57.118434 kernel: thunder_bgx, ver 1.0 Dec 12 17:25:57.118453 kernel: nicpf, ver 1.0 Dec 12 17:25:57.118473 kernel: nicvf, ver 1.0 Dec 12 17:25:57.118828 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:25:57.119104 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:25:53 UTC (1765560353) Dec 12 17:25:57.119131 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:25:57.119160 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Dec 12 17:25:57.119180 kernel: watchdog: NMI not fully supported Dec 12 17:25:57.119199 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:25:57.119218 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:25:57.119237 kernel: Segment Routing with IPv6 Dec 12 17:25:57.119256 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 17:25:57.119275 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:25:57.119337 kernel: Key type 
dns_resolver registered Dec 12 17:25:57.119357 kernel: registered taskstats version 1 Dec 12 17:25:57.119377 kernel: Loading compiled-in X.509 certificates Dec 12 17:25:57.119397 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 12 17:25:57.119417 kernel: Demotion targets for Node 0: null Dec 12 17:25:57.119436 kernel: Key type .fscrypt registered Dec 12 17:25:57.119456 kernel: Key type fscrypt-provisioning registered Dec 12 17:25:57.119480 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 17:25:57.119500 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:25:57.119519 kernel: ima: No architecture policies found Dec 12 17:25:57.119541 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:25:57.119561 kernel: clk: Disabling unused clocks Dec 12 17:25:57.119582 kernel: PM: genpd: Disabling unused power domains Dec 12 17:25:57.119602 kernel: Freeing unused kernel memory: 12416K Dec 12 17:25:57.119622 kernel: Run /init as init process Dec 12 17:25:57.119647 kernel: with arguments: Dec 12 17:25:57.119667 kernel: /init Dec 12 17:25:57.119687 kernel: with environment: Dec 12 17:25:57.119706 kernel: HOME=/ Dec 12 17:25:57.119726 kernel: TERM=linux Dec 12 17:25:57.119747 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 12 17:25:57.120060 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 12 17:25:57.120355 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 12 17:25:57.120392 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:25:57.120413 kernel: GPT:25804799 != 33554431 Dec 12 17:25:57.120432 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:25:57.120452 kernel: GPT:25804799 != 33554431 Dec 12 17:25:57.120472 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:25:57.120501 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 12 17:25:57.120521 kernel: SCSI subsystem initialized Dec 12 17:25:57.120541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:25:57.120560 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:25:57.120580 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:25:57.120601 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:25:57.120620 kernel: raid6: neonx8 gen() 6562 MB/s Dec 12 17:25:57.120647 kernel: raid6: neonx4 gen() 6561 MB/s Dec 12 17:25:57.120668 kernel: raid6: neonx2 gen() 5447 MB/s Dec 12 17:25:57.120687 kernel: raid6: neonx1 gen() 3955 MB/s Dec 12 17:25:57.120707 kernel: raid6: int64x8 gen() 3648 MB/s Dec 12 17:25:57.120726 kernel: raid6: int64x4 gen() 3717 MB/s Dec 12 17:25:57.120745 kernel: raid6: int64x2 gen() 3575 MB/s Dec 12 17:25:57.120764 kernel: raid6: int64x1 gen() 2370 MB/s Dec 12 17:25:57.120789 kernel: raid6: using algorithm neonx8 gen() 6562 MB/s Dec 12 17:25:57.120814 kernel: raid6: .... 
xor() 4635 MB/s, rmw enabled Dec 12 17:25:57.120835 kernel: raid6: using neon recovery algorithm Dec 12 17:25:57.120856 kernel: xor: measuring software checksum speed Dec 12 17:25:57.120876 kernel: 8regs : 12248 MB/sec Dec 12 17:25:57.120895 kernel: 32regs : 12371 MB/sec Dec 12 17:25:57.120916 kernel: arm64_neon : 8697 MB/sec Dec 12 17:25:57.120940 kernel: xor: using function: 32regs (12371 MB/sec) Dec 12 17:25:57.120960 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:25:57.120999 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (221) Dec 12 17:25:57.121029 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 12 17:25:57.121049 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:25:57.121069 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 12 17:25:57.121088 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:25:57.121117 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:25:57.121137 kernel: loop: module loaded Dec 12 17:25:57.121156 kernel: loop0: detected capacity change from 0 to 91480 Dec 12 17:25:57.121176 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:25:57.121198 systemd[1]: Successfully made /usr/ read-only. Dec 12 17:25:57.121226 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:25:57.121259 systemd[1]: Detected virtualization amazon. Dec 12 17:25:57.121323 systemd[1]: Detected architecture arm64. Dec 12 17:25:57.121350 systemd[1]: Running in initrd. Dec 12 17:25:57.121371 systemd[1]: No hostname configured, using default hostname. Dec 12 17:25:57.121392 systemd[1]: Hostname set to . Dec 12 17:25:57.121413 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:25:57.121434 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:25:57.121463 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:25:57.121483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:25:57.121504 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:25:57.121527 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:25:57.121549 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:25:57.121593 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:25:57.121616 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:25:57.121639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:25:57.121662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:25:57.121684 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:25:57.121711 systemd[1]: Reached target paths.target - Path Units. 
Dec 12 17:25:57.121733 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:25:57.121754 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:25:57.121776 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:25:57.121797 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:25:57.121819 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:25:57.121841 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:25:57.121866 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:25:57.121888 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:25:57.121909 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:25:57.121931 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:25:57.121954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:25:57.121976 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:25:57.121999 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:25:57.122027 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:25:57.122049 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:25:57.122070 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:25:57.122092 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:25:57.122114 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:25:57.122136 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:25:57.122157 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:25:57.122184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:25:57.122206 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:25:57.122233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:25:57.122255 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:25:57.122312 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:25:57.122399 systemd-journald[361]: Collecting audit messages is enabled. Dec 12 17:25:57.122450 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:25:57.122473 kernel: audit: type=1130 audit(1765560357.095:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.122495 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:25:57.122516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:25:57.122542 systemd-journald[361]: Journal started Dec 12 17:25:57.122579 systemd-journald[361]: Runtime Journal (/run/log/journal/ec2db0cc917a780930b9ad9a7b928ada) is 8M, max 75.3M, 67.3M free. 
Dec 12 17:25:57.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.133114 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:25:57.133209 kernel: audit: type=1130 audit(1765560357.123:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.136607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:25:57.161799 systemd-tmpfiles[377]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:25:57.175006 systemd-modules-load[362]: Inserted module 'br_netfilter' Dec 12 17:25:57.176667 kernel: Bridge firewalling registered Dec 12 17:25:57.178208 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:25:57.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.181875 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:25:57.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.197074 kernel: audit: type=1130 audit(1765560357.180:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.205048 kernel: audit: type=1130 audit(1765560357.180:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.205095 kernel: audit: type=1130 audit(1765560357.196:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.197961 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:25:57.212556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:25:57.224088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:25:57.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.235320 kernel: audit: type=1130 audit(1765560357.229:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 17:25:57.238948 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:25:57.270898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:25:57.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.280000 audit: BPF prog-id=6 op=LOAD Dec 12 17:25:57.283098 kernel: audit: type=1130 audit(1765560357.269:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.283175 kernel: audit: type=1334 audit(1765560357.280:9): prog-id=6 op=LOAD Dec 12 17:25:57.284087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:25:57.287415 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:25:57.305207 kernel: audit: type=1130 audit(1765560357.294:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.306532 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:25:57.361363 dracut-cmdline[402]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:25:57.466885 systemd-resolved[401]: Positive Trust Anchors: Dec 12 17:25:57.469364 systemd-resolved[401]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:25:57.472885 systemd-resolved[401]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:25:57.473166 systemd-resolved[401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:25:57.672324 kernel: Loading iSCSI transport class v2.0-870. Dec 12 17:25:57.734638 kernel: iscsi: registered transport (tcp) Dec 12 17:25:57.760314 kernel: random: crng init done Dec 12 17:25:57.760740 systemd-resolved[401]: Defaulting to hostname 'linux'. Dec 12 17:25:57.777358 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 12 17:25:57.793111 kernel: audit: type=1130 audit(1765560357.782:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.783668 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:25:57.806915 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:25:57.807001 kernel: QLogic iSCSI HBA Driver Dec 12 17:25:57.853517 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:25:57.901707 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:25:57.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:57.905522 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:25:58.014583 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:25:58.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:58.020520 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 17:25:58.033405 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:25:58.105399 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:25:58.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:58.108000 audit: BPF prog-id=7 op=LOAD Dec 12 17:25:58.108000 audit: BPF prog-id=8 op=LOAD Dec 12 17:25:58.114098 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:25:58.183833 systemd-udevd[642]: Using default interface naming scheme 'v257'. Dec 12 17:25:58.208083 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:25:58.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:58.219740 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:25:58.280371 dracut-pre-trigger[707]: rd.md=0: removing MD RAID activation Dec 12 17:25:58.298932 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:25:58.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:58.305000 audit: BPF prog-id=9 op=LOAD Dec 12 17:25:58.308133 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:25:58.352430 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 12 17:25:58.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:58.360935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:25:58.412882 systemd-networkd[759]: lo: Link UP Dec 12 17:25:58.412898 systemd-networkd[759]: lo: Gained carrier Dec 12 17:25:58.417803 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:25:58.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:58.423569 systemd[1]: Reached target network.target - Network. Dec 12 17:25:58.538558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:25:58.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:58.555643 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:25:58.781734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:25:58.784637 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:25:58.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:58.791408 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:25:58.801342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:25:58.817693 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:25:58.817780 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 12 17:25:58.827424 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 12 17:25:58.827919 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 12 17:25:58.828273 kernel: nvme nvme0: using unchecked data buffer Dec 12 17:25:58.841321 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:68:e2:53:92:53 Dec 12 17:25:58.843951 (udev-worker)[784]: Network interface NamePolicy= disabled on kernel command line. Dec 12 17:25:58.871578 systemd-networkd[759]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:25:58.873946 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:25:58.885941 systemd-networkd[759]: eth0: Link UP Dec 12 17:25:58.886659 systemd-networkd[759]: eth0: Gained carrier Dec 12 17:25:58.886951 systemd-networkd[759]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:25:58.899090 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:25:58.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:58.909463 systemd-networkd[759]: eth0: DHCPv4 address 172.31.16.55/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 12 17:25:58.984140 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 12 17:25:59.015579 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 12 17:25:59.048562 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:25:59.083411 disk-uuid[896]: Primary Header is updated. Dec 12 17:25:59.083411 disk-uuid[896]: Secondary Entries is updated. Dec 12 17:25:59.083411 disk-uuid[896]: Secondary Header is updated. Dec 12 17:25:59.144157 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 12 17:25:59.232035 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 12 17:25:59.524519 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:25:59.531265 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:25:59.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:59.538889 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:25:59.541976 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:25:59.550774 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:25:59.592390 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:25:59.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:00.212916 disk-uuid[898]: Warning: The kernel is still using the old partition table. Dec 12 17:26:00.212916 disk-uuid[898]: The new table will be used at the next reboot or after you Dec 12 17:26:00.212916 disk-uuid[898]: run partprobe(8) or kpartx(8) Dec 12 17:26:00.212916 disk-uuid[898]: The operation has completed successfully. Dec 12 17:26:00.230987 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:26:00.231457 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:26:00.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:00.239607 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:26:00.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:00.312337 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1099) Dec 12 17:26:00.316979 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:00.317050 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:00.356773 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 17:26:00.356854 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 17:26:00.367367 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:00.368346 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:26:00.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:00.378516 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:26:00.572527 systemd-networkd[759]: eth0: Gained IPv6LL Dec 12 17:26:01.803211 ignition[1118]: Ignition 2.22.0 Dec 12 17:26:01.803241 ignition[1118]: Stage: fetch-offline Dec 12 17:26:01.807260 ignition[1118]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:01.807355 ignition[1118]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:01.812544 ignition[1118]: Ignition finished successfully Dec 12 17:26:01.816393 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:26:01.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:01.822671 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 12 17:26:01.886506 ignition[1125]: Ignition 2.22.0 Dec 12 17:26:01.886540 ignition[1125]: Stage: fetch Dec 12 17:26:01.888161 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:01.888421 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:01.888618 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:01.909616 ignition[1125]: PUT result: OK Dec 12 17:26:01.915659 ignition[1125]: parsed url from cmdline: "" Dec 12 17:26:01.915688 ignition[1125]: no config URL provided Dec 12 17:26:01.915707 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:26:01.915742 ignition[1125]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:26:01.915778 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:01.924145 ignition[1125]: PUT result: OK Dec 12 17:26:01.924311 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 12 17:26:01.931920 ignition[1125]: GET result: OK Dec 12 17:26:01.932345 ignition[1125]: parsing config with SHA512: 08f7e110630c82ac0a202ecb1967a3742fbbd3db1f48ba5ee36a5525f38444622f47989638c4123304018983f2cda39bee8cf2b8d7b999ea52f0d4a236861b74 Dec 12 17:26:01.949187 unknown[1125]: fetched base config from "system" Dec 12 17:26:01.949258 unknown[1125]: fetched base config from "system" Dec 12 17:26:01.950846 ignition[1125]: fetch: fetch complete Dec 12 17:26:01.949273 unknown[1125]: fetched user config from "aws" Dec 12 17:26:01.950860 ignition[1125]: fetch: fetch passed Dec 12 17:26:01.951046 ignition[1125]: Ignition finished successfully Dec 12 17:26:01.964375 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 17:26:01.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:01.972274 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:26:02.029907 ignition[1131]: Ignition 2.22.0 Dec 12 17:26:02.029941 ignition[1131]: Stage: kargs Dec 12 17:26:02.030610 ignition[1131]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:02.030646 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:02.030827 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:02.041462 ignition[1131]: PUT result: OK Dec 12 17:26:02.046693 ignition[1131]: kargs: kargs passed Dec 12 17:26:02.047060 ignition[1131]: Ignition finished successfully Dec 12 17:26:02.054428 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:26:02.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:02.059303 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 12 17:26:02.122244 ignition[1137]: Ignition 2.22.0 Dec 12 17:26:02.124239 ignition[1137]: Stage: disks Dec 12 17:26:02.126477 ignition[1137]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:02.126518 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:02.126679 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:02.132126 ignition[1137]: PUT result: OK Dec 12 17:26:02.141120 ignition[1137]: disks: disks passed Dec 12 17:26:02.141234 ignition[1137]: Ignition finished successfully Dec 12 17:26:02.143789 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:26:02.159400 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 12 17:26:02.159442 kernel: audit: type=1130 audit(1765560362.148:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:02.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:02.159908 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:26:02.160922 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:26:02.163654 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:26:02.167432 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:26:02.168152 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:26:02.177234 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:26:02.300944 systemd-fsck[1145]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 12 17:26:02.308724 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:26:02.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:02.321569 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:26:02.326222 kernel: audit: type=1130 audit(1765560362.310:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:02.572325 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 12 17:26:02.573273 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:26:02.578248 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:26:02.631303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:26:02.635959 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:26:02.643025 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:26:02.644149 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:26:02.644207 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:26:02.674878 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Dec 12 17:26:02.680059 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:26:02.701335 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1164) Dec 12 17:26:02.708402 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:02.708474 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:02.716239 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 17:26:02.716343 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 17:26:02.719258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:26:04.042386 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:26:04.052318 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:26:04.061865 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:26:04.070904 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:26:04.962875 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:26:04.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:04.968715 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:26:04.981079 kernel: audit: type=1130 audit(1765560364.964:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:04.981596 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:26:05.009724 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:26:05.012098 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:05.048420 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:26:05.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:05.058360 kernel: audit: type=1130 audit(1765560365.051:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:05.075661 ignition[1277]: INFO : Ignition 2.22.0 Dec 12 17:26:05.075661 ignition[1277]: INFO : Stage: mount Dec 12 17:26:05.080335 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:05.080335 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:05.080335 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:05.090684 ignition[1277]: INFO : PUT result: OK Dec 12 17:26:05.092961 ignition[1277]: INFO : mount: mount passed Dec 12 17:26:05.092961 ignition[1277]: INFO : Ignition finished successfully Dec 12 17:26:05.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:05.096603 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
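Aside: the `cut: /sysroot/etc/passwd: No such file or directory` messages above are the root-filesystem setup step reading account databases that do not exist yet on a fresh image. A hedged sketch of that kind of read: pull the login names (the first `:`-separated field, which is what `cut -d: -f1` prints) while tolerating the missing file. Only the path and the missing-file situation come from the log; the helper itself is illustrative.

```python
from pathlib import Path

def user_names(passwd_path="/sysroot/etc/passwd"):
    """Return login names from a passwd-format file, or [] if it is absent."""
    path = Path(passwd_path)
    if not path.exists():
        # Matches the situation in the log: the file has not been seeded yet.
        return []
    names = []
    for line in path.read_text().splitlines():
        if line and not line.startswith("#"):
            names.append(line.split(":", 1)[0])   # field 1, like `cut -d: -f1`
    return names

if __name__ == "__main__":
    print(user_names())
```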
Dec 12 17:26:05.100820 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:26:05.117338 kernel: audit: type=1130 audit(1765560365.097:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:05.139420 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:26:05.182330 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1288) Dec 12 17:26:05.186723 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:05.186801 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:05.194408 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 17:26:05.194488 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 17:26:05.198910 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:26:05.256417 ignition[1305]: INFO : Ignition 2.22.0 Dec 12 17:26:05.256417 ignition[1305]: INFO : Stage: files Dec 12 17:26:05.262476 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:05.262476 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:05.262476 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:05.273650 ignition[1305]: INFO : PUT result: OK Dec 12 17:26:05.280965 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:26:05.286443 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:26:05.286443 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:26:05.337735 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:26:05.343929 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:26:05.347810 unknown[1305]: wrote ssh authorized keys file for user: core Dec 12 17:26:05.350417 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:26:05.357069 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:26:05.363826 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:26:05.465326 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:26:05.600635 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:26:05.600635 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:26:05.610272 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:26:05.652295 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:26:05.652295 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:26:05.652295 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 12 17:26:06.075925 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:26:06.488505 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:26:06.493696 ignition[1305]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:26:06.560967 ignition[1305]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:26:06.571724 ignition[1305]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:26:06.576894 ignition[1305]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:26:06.576894 ignition[1305]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:26:06.576894 ignition[1305]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:26:06.576894 ignition[1305]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:26:06.576894 ignition[1305]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:26:06.576894 ignition[1305]: INFO : files: files passed Dec 12 17:26:06.576894 ignition[1305]: INFO : Ignition finished successfully Dec 12 17:26:06.599122 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:26:06.612336 kernel: audit: type=1130 audit(1765560366.597:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:06.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.603133 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:26:06.611594 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:26:06.641007 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:26:06.642068 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:26:06.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.659611 kernel: audit: type=1130 audit(1765560366.647:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.659685 kernel: audit: type=1131 audit(1765560366.647:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.704671 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:26:06.704671 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:26:06.712874 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:26:06.719396 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:26:06.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.725798 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:26:06.730344 kernel: audit: type=1130 audit(1765560366.724:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.737532 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:26:06.832557 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:26:06.835131 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:26:06.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.841650 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:26:06.850692 kernel: audit: type=1130 audit(1765560366.839:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:06.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.846086 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:26:06.856273 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:26:06.858108 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:26:06.906235 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:26:06.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:06.915698 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:26:06.969847 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:26:06.973117 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:26:06.978337 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:26:06.982498 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:26:06.990139 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:26:06.990516 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:26:06.997737 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:26:06.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.003383 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:26:07.008084 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:26:07.011499 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:26:07.017266 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:26:07.025614 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:26:07.031931 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:26:07.035265 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:26:07.043626 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:26:07.048067 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:26:07.054448 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:26:07.057689 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:26:07.058009 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:26:07.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.067071 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:26:07.070508 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:26:07.075068 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
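Aside: the files stage logged a little further up (an SSH key for `core`, the Helm tarball fetched from get.helm.sh, files under /home/core, and `prepare-helm.service` preset to enabled) is the kind of outcome an Ignition spec-3.x config describes. A sketch of such a config, assembled as a Python dict and emitted as JSON; the spec version, the key material, and the unit contents are placeholders rather than values recovered from this log.

```python
import json

config = {
    "ignition": {"version": "3.4.0"},                      # assumed spec version
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"],
    }]},
    "storage": {"files": [{
        "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
        "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"},
    }]},
    "systemd": {"units": [{
        "name": "prepare-helm.service",
        "enabled": True,
        # Placeholder unit body; the real unit's contents are not in this log.
        "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n\n"
                    "[Service]\nType=oneshot\n"
                    "ExecStart=/usr/bin/tar -C /opt -xzf /opt/helm-v3.17.3-linux-arm64.tar.gz\n\n"
                    "[Install]\nWantedBy=multi-user.target\n",
    }]},
}

print(json.dumps(config, indent=2))
```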
Dec 12 17:26:07.080148 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:26:07.083889 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:26:07.084162 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:26:07.095105 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:26:07.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.096126 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:26:07.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.104539 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:26:07.105481 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:26:07.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.114370 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:26:07.119134 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:26:07.119503 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:26:07.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.129990 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:26:07.135551 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:26:07.137834 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:26:07.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.145153 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:26:07.149718 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:26:07.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.156255 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:26:07.162896 kernel: kauditd_printk_skb: 9 callbacks suppressed Dec 12 17:26:07.162989 kernel: audit: type=1131 audit(1765560367.154:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.159833 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:26:07.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:07.172324 kernel: audit: type=1131 audit(1765560367.163:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.189224 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:26:07.193711 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:26:07.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.215351 kernel: audit: type=1130 audit(1765560367.205:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.221853 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:26:07.224415 kernel: audit: type=1131 audit(1765560367.205:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.236665 ignition[1361]: INFO : Ignition 2.22.0 Dec 12 17:26:07.236665 ignition[1361]: INFO : Stage: umount Dec 12 17:26:07.240894 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:07.240894 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:26:07.240894 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:26:07.252442 ignition[1361]: INFO : PUT result: OK Dec 12 17:26:07.262735 ignition[1361]: INFO : umount: umount passed Dec 12 17:26:07.262735 ignition[1361]: INFO : Ignition finished successfully Dec 12 17:26:07.270640 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:26:07.270920 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:26:07.310570 kernel: audit: type=1131 audit(1765560367.280:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.310623 kernel: audit: type=1131 audit(1765560367.283:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.310652 kernel: audit: type=1131 audit(1765560367.286:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.310680 kernel: audit: type=1131 audit(1765560367.301:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:07.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.281818 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:26:07.281940 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:26:07.284481 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:26:07.284603 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:26:07.329920 kernel: audit: type=1131 audit(1765560367.311:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.287555 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 17:26:07.287685 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 17:26:07.303116 systemd[1]: Stopped target network.target - Network. Dec 12 17:26:07.305429 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:26:07.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.305564 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:26:07.312905 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:26:07.374477 kernel: audit: type=1131 audit(1765560367.354:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.313938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:26:07.328860 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:26:07.331995 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:26:07.336217 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:26:07.339569 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:26:07.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.339669 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:26:07.342454 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Dec 12 17:26:07.342546 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:26:07.347602 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 17:26:07.347703 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:26:07.351647 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:26:07.351827 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:26:07.355998 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:26:07.356139 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:26:07.363241 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:26:07.367671 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:26:07.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.395851 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:26:07.396080 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:26:07.416882 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:26:07.417944 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:26:07.484096 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:26:07.508000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:26:07.515000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:26:07.517451 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:26:07.517770 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:26:07.532152 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:26:07.536769 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:26:07.545231 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:26:07.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.561788 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:26:07.562128 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:07.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.571095 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:26:07.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.571232 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:26:07.574700 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:26:07.583807 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 12 17:26:07.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.584016 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:26:07.591454 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:26:07.592811 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:26:07.623695 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:26:07.628423 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:26:07.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.635446 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:26:07.635740 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:26:07.640546 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:26:07.640636 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:26:07.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.644476 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:26:07.645041 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:26:07.655050 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:26:07.655619 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:26:07.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.669998 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:26:07.671556 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:26:07.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.684620 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:26:07.694465 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:26:07.697120 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:26:07.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.705381 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:26:07.705704 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:26:07.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.715526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 12 17:26:07.716234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:07.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.726128 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:26:07.727107 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:26:07.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.746457 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:26:07.748673 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:26:07.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.760141 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:26:07.766753 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:26:07.815273 systemd[1]: Switching root. Dec 12 17:26:07.892593 systemd-journald[361]: Journal stopped Dec 12 17:26:11.748313 systemd-journald[361]: Received SIGTERM from PID 1 (systemd). Dec 12 17:26:11.748457 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:26:11.748516 kernel: SELinux: policy capability open_perms=1 Dec 12 17:26:11.748548 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:26:11.748580 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:26:11.748613 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:26:11.748646 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:26:11.748678 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:26:11.748712 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:26:11.748748 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:26:11.748780 systemd[1]: Successfully loaded SELinux policy in 144.913ms. Dec 12 17:26:11.748835 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.162ms. Dec 12 17:26:11.748870 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:26:11.748903 systemd[1]: Detected virtualization amazon. Dec 12 17:26:11.748936 systemd[1]: Detected architecture arm64. Dec 12 17:26:11.748968 systemd[1]: Detected first boot. Dec 12 17:26:11.749023 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:26:11.749062 zram_generator::config[1404]: No configuration found. Dec 12 17:26:11.749116 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:26:11.749146 systemd[1]: Populated /etc with preset unit settings. 
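Aside: after the switch to the real root, the log above records the SELinux policy load and its capability flags. A small sketch for inspecting the same state at runtime through the kernel's selinuxfs interface; the /sys/fs/selinux paths are the standard kernel interface, not something taken from this log.

```python
from pathlib import Path

SELINUXFS = Path("/sys/fs/selinux")

def selinux_status():
    if not SELINUXFS.is_dir():
        return "disabled (selinuxfs not mounted)"
    enforce = (SELINUXFS / "enforce").read_text().strip()
    version = (SELINUXFS / "policyvers").read_text().strip()
    mode = "enforcing" if enforce == "1" else "permissive"
    return f"{mode}, policy version {version}"

if __name__ == "__main__":
    print(selinux_status())
```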
Dec 12 17:26:11.749176 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:26:11.749210 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:26:11.749246 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:26:11.750017 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:26:11.750079 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:26:11.750114 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:26:11.750148 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:26:11.750179 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:26:11.750695 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:26:11.750752 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:26:11.750783 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:26:11.750815 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:26:11.750848 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:26:11.750879 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:26:11.750910 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:26:11.750946 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:26:11.751246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:26:11.751307 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 17:26:11.751345 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:26:11.751381 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:26:11.751411 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:26:11.751442 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:26:11.751489 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:26:11.751522 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:26:11.753417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:26:11.753452 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:26:11.753483 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 17:26:11.753512 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:26:11.753547 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:26:11.753586 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:26:11.753616 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:26:11.753646 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:26:11.753690 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:26:11.753722 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
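Aside: the `\x2d` sequences in the slice names above are systemd's unit-name escaping: characters outside the allowed set, including `-` (since `-` doubles as the path-separator replacement), are written as C-style `\xXX` escapes. A rough sketch approximating what `systemd-escape` does; the exact allowed-character set used here is an assumption based on systemd.unit(5).

```python
def unit_escape(name: str) -> str:
    """Approximate systemd unit-name escaping (see systemd-escape(1))."""
    out = []
    for i, ch in enumerate(name):
        if ch == "/":
            out.append("-")                      # path separator becomes "-"
        elif ch.isalnum() or ch in "_:" or (ch == "." and i > 0):
            out.append(ch)                       # kept as-is
        else:
            out.append(f"\\x{ord(ch):02x}")      # everything else, incl. "-"
    return "".join(out)

if __name__ == "__main__":
    # "addon-config" -> "addon\x2dconfig", as in the slice names above.
    print(unit_escape("addon-config"))
```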
Dec 12 17:26:11.753757 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:26:11.753791 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 17:26:11.753825 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 17:26:11.753858 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:26:11.753890 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:26:11.753921 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:26:11.753951 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:26:11.753981 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:26:11.754014 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:26:11.754044 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:26:11.754080 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:26:11.754110 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:26:11.754141 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:26:11.754170 systemd[1]: Reached target machines.target - Containers. Dec 12 17:26:11.754202 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:26:11.754234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:11.754270 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:26:11.754366 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:26:11.754401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:26:11.754431 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:26:11.754460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:11.754494 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:26:11.754523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:26:11.754563 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:26:11.754595 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:26:11.754625 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:26:11.754656 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:26:11.754689 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:26:11.754723 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:11.754758 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:26:11.754788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:26:11.754819 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
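Aside: the modprobe@<module>.service units starting above are instances of a systemd template that loads one kernel module each. A sketch of the equivalent check-then-load step against /proc/modules, with modprobe as the fallback (loading requires root); the module names come from the log, the helpers are illustrative, and built-in modules will simply not appear in /proc/modules.

```python
import subprocess

def module_loaded(name: str) -> bool:
    # /proc/modules lists one loaded module per line, name in the first column.
    with open("/proc/modules") as f:
        return any(line.split()[0] == name for line in f)

def ensure_module(name: str) -> None:
    if not module_loaded(name):
        subprocess.run(["modprobe", name], check=True)

if __name__ == "__main__":
    for mod in ["configfs", "dm_mod", "fuse", "loop"]:   # instances from the log
        print(mod, "loaded" if module_loaded(mod) else "not loaded")
```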
Dec 12 17:26:11.754850 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:26:11.754880 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:26:11.754913 kernel: ACPI: bus type drm_connector registered Dec 12 17:26:11.754943 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:26:11.754973 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:26:11.755002 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:26:11.755030 kernel: fuse: init (API version 7.41) Dec 12 17:26:11.755061 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:26:11.755093 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:26:11.755127 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:26:11.755156 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:26:11.755189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:26:11.755342 systemd-journald[1481]: Collecting audit messages is enabled. Dec 12 17:26:11.755408 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:26:11.755439 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:26:11.755475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:26:11.755508 systemd-journald[1481]: Journal started Dec 12 17:26:11.755559 systemd-journald[1481]: Runtime Journal (/run/log/journal/ec2db0cc917a780930b9ad9a7b928ada) is 8M, max 75.3M, 67.3M free. Dec 12 17:26:11.212000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 17:26:11.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.528000 audit: BPF prog-id=14 op=UNLOAD Dec 12 17:26:11.528000 audit: BPF prog-id=13 op=UNLOAD Dec 12 17:26:11.530000 audit: BPF prog-id=15 op=LOAD Dec 12 17:26:11.530000 audit: BPF prog-id=16 op=LOAD Dec 12 17:26:11.530000 audit: BPF prog-id=17 op=LOAD Dec 12 17:26:11.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.740000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 17:26:11.740000 audit[1481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe73d0700 a2=4000 a3=0 items=0 ppid=1 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:11.758514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
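Aside: the journal that starts above (runtime storage under /run until it is flushed to /var later in the log) can be read back for exactly this kind of boot analysis. A sketch using `journalctl -b -o json`, which prints one JSON object per entry; `_SYSTEMD_UNIT` and `MESSAGE` are standard journal field names, and the unit filtered on is just an example.

```python
import json
import subprocess

def boot_entries(match="_SYSTEMD_UNIT=ignition-fetch.service"):
    # -b: current boot only; -o json: one JSON object per line.
    out = subprocess.run(
        ["journalctl", "-b", "-o", "json", match],
        capture_output=True, text=True, check=False,
    ).stdout
    return [json.loads(line) for line in out.splitlines() if line.startswith("{")]

if __name__ == "__main__":
    for entry in boot_entries():
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))
```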
Dec 12 17:26:11.740000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 17:26:11.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.019123 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:26:11.036237 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 12 17:26:11.037620 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:26:11.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.771093 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:26:11.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.776235 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:26:11.776830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:26:11.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.784200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:11.785757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:11.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.792592 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:26:11.792967 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:26:11.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:11.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.802900 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:11.803510 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:11.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.809735 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:26:11.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.816489 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:26:11.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.823927 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:26:11.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.830653 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:26:11.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.838575 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:26:11.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:11.868174 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:26:11.878882 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 17:26:11.888497 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:26:11.897652 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:26:11.903527 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:26:11.903620 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:26:11.912578 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Dec 12 17:26:11.919433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:26:11.919696 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:11.933636 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:26:11.943990 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:26:11.950347 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:26:11.957662 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:26:11.961818 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:26:11.964333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:26:11.976127 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:26:11.990711 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:26:12.001430 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:26:12.006056 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:26:12.018582 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:26:12.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.031866 systemd-journald[1481]: Time spent on flushing to /var/log/journal/ec2db0cc917a780930b9ad9a7b928ada is 61.127ms for 1059 entries. Dec 12 17:26:12.031866 systemd-journald[1481]: System Journal (/var/log/journal/ec2db0cc917a780930b9ad9a7b928ada) is 8M, max 588.1M, 580.1M free. Dec 12 17:26:12.116602 systemd-journald[1481]: Received client request to flush runtime journal. Dec 12 17:26:12.116720 kernel: loop1: detected capacity change from 0 to 100192 Dec 12 17:26:12.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.052041 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:26:12.059661 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:26:12.072611 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:26:12.123482 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:26:12.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.135201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:12.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:12.155668 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:26:12.169892 kernel: kauditd_printk_skb: 70 callbacks suppressed Dec 12 17:26:12.169992 kernel: audit: type=1130 audit(1765560372.159:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.170047 kernel: audit: type=1334 audit(1765560372.166:131): prog-id=18 op=LOAD Dec 12 17:26:12.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.166000 audit: BPF prog-id=18 op=LOAD Dec 12 17:26:12.172974 kernel: audit: type=1334 audit(1765560372.170:132): prog-id=19 op=LOAD Dec 12 17:26:12.170000 audit: BPF prog-id=19 op=LOAD Dec 12 17:26:12.172000 audit: BPF prog-id=20 op=LOAD Dec 12 17:26:12.175326 kernel: audit: type=1334 audit(1765560372.172:133): prog-id=20 op=LOAD Dec 12 17:26:12.175797 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 17:26:12.180000 audit: BPF prog-id=21 op=LOAD Dec 12 17:26:12.184361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:26:12.191441 kernel: audit: type=1334 audit(1765560372.180:134): prog-id=21 op=LOAD Dec 12 17:26:12.195665 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:26:12.201793 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:26:12.206444 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:26:12.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.222351 kernel: audit: type=1130 audit(1765560372.209:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.277089 kernel: audit: type=1334 audit(1765560372.263:136): prog-id=22 op=LOAD Dec 12 17:26:12.277245 kernel: audit: type=1334 audit(1765560372.264:137): prog-id=23 op=LOAD Dec 12 17:26:12.277356 kernel: audit: type=1334 audit(1765560372.264:138): prog-id=24 op=LOAD Dec 12 17:26:12.263000 audit: BPF prog-id=22 op=LOAD Dec 12 17:26:12.264000 audit: BPF prog-id=23 op=LOAD Dec 12 17:26:12.264000 audit: BPF prog-id=24 op=LOAD Dec 12 17:26:12.267986 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 17:26:12.279414 kernel: audit: type=1334 audit(1765560372.275:139): prog-id=25 op=LOAD Dec 12 17:26:12.275000 audit: BPF prog-id=25 op=LOAD Dec 12 17:26:12.276000 audit: BPF prog-id=26 op=LOAD Dec 12 17:26:12.276000 audit: BPF prog-id=27 op=LOAD Dec 12 17:26:12.280746 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:26:12.291073 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Dec 12 17:26:12.291119 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Dec 12 17:26:12.317794 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 12 17:26:12.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.427216 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:26:12.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.463612 systemd-nsresourced[1559]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 17:26:12.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.472562 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 17:26:12.510341 kernel: loop2: detected capacity change from 0 to 109872 Dec 12 17:26:12.617481 systemd-oomd[1554]: No swap; memory pressure usage will be degraded Dec 12 17:26:12.619046 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 17:26:12.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.718108 systemd-resolved[1555]: Positive Trust Anchors: Dec 12 17:26:12.718146 systemd-resolved[1555]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:26:12.718158 systemd-resolved[1555]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:26:12.718222 systemd-resolved[1555]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:26:12.731801 systemd-resolved[1555]: Defaulting to hostname 'linux'. Dec 12 17:26:12.734588 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:26:12.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.737717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:26:12.798346 kernel: loop3: detected capacity change from 0 to 61504 Dec 12 17:26:13.171343 kernel: loop4: detected capacity change from 0 to 200800 Dec 12 17:26:13.256504 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:26:13.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:13.255000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:26:13.255000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:26:13.264000 audit: BPF prog-id=28 op=LOAD Dec 12 17:26:13.264000 audit: BPF prog-id=29 op=LOAD Dec 12 17:26:13.267779 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:26:13.332555 systemd-udevd[1580]: Using default interface naming scheme 'v257'. Dec 12 17:26:13.474368 kernel: loop5: detected capacity change from 0 to 100192 Dec 12 17:26:13.494397 kernel: loop6: detected capacity change from 0 to 109872 Dec 12 17:26:13.512334 kernel: loop7: detected capacity change from 0 to 61504 Dec 12 17:26:13.527418 kernel: loop1: detected capacity change from 0 to 200800 Dec 12 17:26:13.550214 (sd-merge)[1582]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Dec 12 17:26:13.553404 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:26:13.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.559000 audit: BPF prog-id=30 op=LOAD Dec 12 17:26:13.562647 (sd-merge)[1582]: Merged extensions into '/usr'. Dec 12 17:26:13.564436 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:26:13.614685 systemd[1]: Reload requested from client PID 1539 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:26:13.614723 systemd[1]: Reloading... Dec 12 17:26:13.830508 systemd-networkd[1585]: lo: Link UP Dec 12 17:26:13.831061 systemd-networkd[1585]: lo: Gained carrier Dec 12 17:26:13.851955 (udev-worker)[1588]: Network interface NamePolicy= disabled on kernel command line. Dec 12 17:26:13.876325 zram_generator::config[1640]: No configuration found. Dec 12 17:26:14.015674 systemd-networkd[1585]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:14.015691 systemd-networkd[1585]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:26:14.026736 systemd-networkd[1585]: eth0: Link UP Dec 12 17:26:14.028933 systemd-networkd[1585]: eth0: Gained carrier Dec 12 17:26:14.029143 systemd-networkd[1585]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:14.041430 systemd-networkd[1585]: eth0: DHCPv4 address 172.31.16.55/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 12 17:26:14.511421 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 17:26:14.511721 systemd[1]: Reloading finished in 895 ms. Dec 12 17:26:14.541262 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:26:14.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.546479 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:26:14.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:14.574454 systemd[1]: Reached target network.target - Network. Dec 12 17:26:14.596593 systemd[1]: Starting ensure-sysext.service... Dec 12 17:26:14.609274 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:26:14.619816 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:26:14.628739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:26:14.637013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:26:14.645000 audit: BPF prog-id=31 op=LOAD Dec 12 17:26:14.645000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:26:14.645000 audit: BPF prog-id=32 op=LOAD Dec 12 17:26:14.645000 audit: BPF prog-id=33 op=LOAD Dec 12 17:26:14.645000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:26:14.645000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:26:14.649000 audit: BPF prog-id=34 op=LOAD Dec 12 17:26:14.649000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:26:14.649000 audit: BPF prog-id=35 op=LOAD Dec 12 17:26:14.649000 audit: BPF prog-id=36 op=LOAD Dec 12 17:26:14.649000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:26:14.651000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:26:14.652000 audit: BPF prog-id=37 op=LOAD Dec 12 17:26:14.652000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:26:14.656000 audit: BPF prog-id=38 op=LOAD Dec 12 17:26:14.656000 audit: BPF prog-id=39 op=LOAD Dec 12 17:26:14.656000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:26:14.656000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:26:14.660000 audit: BPF prog-id=40 op=LOAD Dec 12 17:26:14.660000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:26:14.666000 audit: BPF prog-id=41 op=LOAD Dec 12 17:26:14.666000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:26:14.666000 audit: BPF prog-id=42 op=LOAD Dec 12 17:26:14.666000 audit: BPF prog-id=43 op=LOAD Dec 12 17:26:14.666000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:26:14.666000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:26:14.673000 audit: BPF prog-id=44 op=LOAD Dec 12 17:26:14.673000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:26:14.673000 audit: BPF prog-id=45 op=LOAD Dec 12 17:26:14.673000 audit: BPF prog-id=46 op=LOAD Dec 12 17:26:14.673000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:26:14.673000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:26:14.700677 systemd[1]: Reload requested from client PID 1714 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:26:14.700719 systemd[1]: Reloading... Dec 12 17:26:14.776707 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:26:14.776777 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:26:14.777465 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:26:14.785698 systemd-tmpfiles[1717]: ACLs are not supported, ignoring. Dec 12 17:26:14.785893 systemd-tmpfiles[1717]: ACLs are not supported, ignoring. Dec 12 17:26:14.811877 systemd-tmpfiles[1717]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:26:14.812610 systemd-tmpfiles[1717]: Skipping /boot Dec 12 17:26:14.856273 systemd-tmpfiles[1717]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:26:14.858767 systemd-tmpfiles[1717]: Skipping /boot Dec 12 17:26:14.993385 zram_generator::config[1821]: No configuration found. 
Dec 12 17:26:15.466693 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 12 17:26:15.470891 systemd[1]: Reloading finished in 769 ms. Dec 12 17:26:15.490000 audit: BPF prog-id=47 op=LOAD Dec 12 17:26:15.490000 audit: BPF prog-id=48 op=LOAD Dec 12 17:26:15.490000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:26:15.490000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:26:15.491000 audit: BPF prog-id=49 op=LOAD Dec 12 17:26:15.492000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:26:15.495000 audit: BPF prog-id=50 op=LOAD Dec 12 17:26:15.495000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:26:15.497000 audit: BPF prog-id=51 op=LOAD Dec 12 17:26:15.497000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:26:15.497000 audit: BPF prog-id=52 op=LOAD Dec 12 17:26:15.497000 audit: BPF prog-id=53 op=LOAD Dec 12 17:26:15.497000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:26:15.497000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:26:15.498000 audit: BPF prog-id=54 op=LOAD Dec 12 17:26:15.507000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:26:15.507000 audit: BPF prog-id=55 op=LOAD Dec 12 17:26:15.507000 audit: BPF prog-id=56 op=LOAD Dec 12 17:26:15.507000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:26:15.507000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:26:15.508000 audit: BPF prog-id=57 op=LOAD Dec 12 17:26:15.508000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:26:15.509000 audit: BPF prog-id=58 op=LOAD Dec 12 17:26:15.509000 audit: BPF prog-id=59 op=LOAD Dec 12 17:26:15.509000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:26:15.509000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:26:15.511000 audit: BPF prog-id=60 op=LOAD Dec 12 17:26:15.511000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:26:15.511000 audit: BPF prog-id=61 op=LOAD Dec 12 17:26:15.511000 audit: BPF prog-id=62 op=LOAD Dec 12 17:26:15.511000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:26:15.511000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:26:15.519410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:26:15.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.527564 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:15.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.590727 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:26:15.597746 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:26:15.601736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:15.604917 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:26:15.610795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:15.618926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:26:15.622719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 17:26:15.623133 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:15.632867 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:26:15.639670 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:26:15.642432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:15.651936 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:26:15.668918 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:26:15.679394 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:26:15.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.692452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:15.693449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:15.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.700071 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:15.701634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:15.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.713089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:15.714676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:26:15.715051 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:15.715274 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:15.715508 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 12 17:26:15.728889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:15.735977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:26:15.741374 systemd-networkd[1585]: eth0: Gained IPv6LL Dec 12 17:26:15.741775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:15.754617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:26:15.757678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:26:15.758053 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:15.758863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:15.759259 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:26:15.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.765231 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:26:15.769417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:26:15.771402 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:26:15.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.778393 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:26:15.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.789000 audit[1896]: SYSTEM_BOOT pid=1896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.808453 systemd[1]: Finished ensure-sysext.service. Dec 12 17:26:15.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.819507 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:26:15.820080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:26:15.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:15.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.836619 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:26:15.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.840326 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:26:15.852768 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:15.855426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:15.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.858992 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:26:15.861823 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:15.863090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:15.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.868964 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:26:15.894435 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:26:15.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:15.902020 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:26:15.905803 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:26:15.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:15.971000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:26:15.971000 audit[1934]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe5d88380 a2=420 a3=0 items=0 ppid=1888 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.971000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:15.973627 augenrules[1934]: No rules Dec 12 17:26:15.976166 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:26:15.978404 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:26:18.801835 ldconfig[1893]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:26:18.814495 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:26:18.821510 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:26:18.865601 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:26:18.869230 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:26:18.872258 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:26:18.875555 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:26:18.878910 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:26:18.881684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:26:18.884798 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:26:18.887824 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:26:18.890360 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:26:18.893315 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:26:18.893377 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:26:18.895439 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:26:18.899444 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:26:18.904747 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:26:18.911217 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:26:18.914689 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:26:18.917726 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:26:18.923625 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:26:18.926823 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:26:18.930680 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:26:18.933716 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:26:18.936149 systemd[1]: Reached target basic.target - Basic System. 
Dec 12 17:26:18.938690 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:26:18.938741 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:26:18.940959 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:26:18.949658 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 17:26:18.958736 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:26:18.968405 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:26:18.975593 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:26:18.981976 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:26:18.984383 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:26:19.011626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:19.020674 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:26:19.025329 jq[1949]: false Dec 12 17:26:19.029727 systemd[1]: Started ntpd.service - Network Time Service. Dec 12 17:26:19.036092 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:26:19.052256 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:26:19.061716 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 12 17:26:19.069619 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:26:19.077481 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:26:19.087824 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:26:19.093333 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:26:19.100791 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:26:19.103603 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:26:19.113699 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:26:19.132212 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:26:19.137884 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:26:19.138423 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:26:19.154141 extend-filesystems[1950]: Found /dev/nvme0n1p6 Dec 12 17:26:19.163517 jq[1964]: true Dec 12 17:26:19.208917 extend-filesystems[1950]: Found /dev/nvme0n1p9 Dec 12 17:26:19.211516 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:26:19.217980 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 12 17:26:19.235586 extend-filesystems[1950]: Checking size of /dev/nvme0n1p9 Dec 12 17:26:19.263360 jq[1976]: true Dec 12 17:26:19.283641 ntpd[1953]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:17 UTC 2025 (1): Starting Dec 12 17:26:19.287843 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:17 UTC 2025 (1): Starting Dec 12 17:26:19.287843 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 17:26:19.287843 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: ---------------------------------------------------- Dec 12 17:26:19.287843 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Dec 12 17:26:19.287843 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 17:26:19.287843 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: corporation. Support and training for ntp-4 are Dec 12 17:26:19.287843 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: available at https://www.nwtime.org/support Dec 12 17:26:19.287843 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: ---------------------------------------------------- Dec 12 17:26:19.283761 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 17:26:19.283782 ntpd[1953]: ---------------------------------------------------- Dec 12 17:26:19.283799 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Dec 12 17:26:19.283817 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 17:26:19.283837 ntpd[1953]: corporation. Support and training for ntp-4 are Dec 12 17:26:19.283855 ntpd[1953]: available at https://www.nwtime.org/support Dec 12 17:26:19.283872 ntpd[1953]: ---------------------------------------------------- Dec 12 17:26:19.298846 ntpd[1953]: proto: precision = 0.096 usec (-23) Dec 12 17:26:19.302609 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: proto: precision = 0.096 usec (-23) Dec 12 17:26:19.302609 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: basedate set to 2025-11-30 Dec 12 17:26:19.302609 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: gps base set to 2025-11-30 (week 2395) Dec 12 17:26:19.302609 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 17:26:19.302609 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 17:26:19.299271 ntpd[1953]: basedate set to 2025-11-30 Dec 12 17:26:19.299323 ntpd[1953]: gps base set to 2025-11-30 (week 2395) Dec 12 17:26:19.299506 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 17:26:19.299552 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 17:26:19.309531 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 17:26:19.315196 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 17:26:19.315196 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: Listen normally on 3 eth0 172.31.16.55:123 Dec 12 17:26:19.315196 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: Listen normally on 4 lo [::1]:123 Dec 12 17:26:19.315196 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: Listen normally on 5 eth0 [fe80::468:e2ff:fe53:9253%2]:123 Dec 12 17:26:19.315196 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: Listening on routing socket on fd #22 for interface updates Dec 12 17:26:19.309601 ntpd[1953]: Listen normally on 3 eth0 172.31.16.55:123 Dec 12 17:26:19.309652 ntpd[1953]: Listen normally on 4 lo [::1]:123 Dec 12 17:26:19.309700 ntpd[1953]: Listen normally on 5 eth0 [fe80::468:e2ff:fe53:9253%2]:123 Dec 12 17:26:19.309746 ntpd[1953]: Listening on routing socket on fd #22 for interface updates Dec 12 
17:26:19.350419 extend-filesystems[1950]: Resized partition /dev/nvme0n1p9 Dec 12 17:26:19.352472 dbus-daemon[1947]: [system] SELinux support is enabled Dec 12 17:26:19.352880 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:26:19.375149 tar[1979]: linux-arm64/LICENSE Dec 12 17:26:19.370515 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:26:19.370586 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:26:19.382985 extend-filesystems[2016]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:26:19.398480 tar[1979]: linux-arm64/helm Dec 12 17:26:19.376481 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:26:19.376521 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:26:19.416234 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:26:19.416234 ntpd[1953]: 12 Dec 17:26:19 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:26:19.414143 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:26:19.410533 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:26:19.414195 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:26:19.423792 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Dec 12 17:26:19.425409 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:26:19.438779 dbus-daemon[1947]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1585 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 17:26:19.447589 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:26:19.456583 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Dec 12 17:26:19.457909 update_engine[1963]: I20251212 17:26:19.456166 1963 main.cc:92] Flatcar Update Engine starting Dec 12 17:26:19.471937 coreos-metadata[1946]: Dec 12 17:26:19.457 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 17:26:19.471937 coreos-metadata[1946]: Dec 12 17:26:19.459 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 12 17:26:19.471937 coreos-metadata[1946]: Dec 12 17:26:19.463 INFO Fetch successful Dec 12 17:26:19.471937 coreos-metadata[1946]: Dec 12 17:26:19.463 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 12 17:26:19.472892 extend-filesystems[2016]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 12 17:26:19.472892 extend-filesystems[2016]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 12 17:26:19.472892 extend-filesystems[2016]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. 
Dec 12 17:26:19.483794 extend-filesystems[1950]: Resized filesystem in /dev/nvme0n1p9 Dec 12 17:26:19.495249 coreos-metadata[1946]: Dec 12 17:26:19.487 INFO Fetch successful Dec 12 17:26:19.495249 coreos-metadata[1946]: Dec 12 17:26:19.487 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 12 17:26:19.488549 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 17:26:19.511965 coreos-metadata[1946]: Dec 12 17:26:19.508 INFO Fetch successful Dec 12 17:26:19.511965 coreos-metadata[1946]: Dec 12 17:26:19.508 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 12 17:26:19.511965 coreos-metadata[1946]: Dec 12 17:26:19.511 INFO Fetch successful Dec 12 17:26:19.511965 coreos-metadata[1946]: Dec 12 17:26:19.511 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 12 17:26:19.512213 update_engine[1963]: I20251212 17:26:19.503542 1963 update_check_scheduler.cc:74] Next update check in 11m24s Dec 12 17:26:19.494104 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:26:19.497563 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:26:19.516036 coreos-metadata[1946]: Dec 12 17:26:19.512 INFO Fetch failed with 404: resource not found Dec 12 17:26:19.516036 coreos-metadata[1946]: Dec 12 17:26:19.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 12 17:26:19.516036 coreos-metadata[1946]: Dec 12 17:26:19.515 INFO Fetch successful Dec 12 17:26:19.516036 coreos-metadata[1946]: Dec 12 17:26:19.515 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 12 17:26:19.498054 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:26:19.524262 coreos-metadata[1946]: Dec 12 17:26:19.516 INFO Fetch successful Dec 12 17:26:19.524262 coreos-metadata[1946]: Dec 12 17:26:19.516 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 12 17:26:19.524262 coreos-metadata[1946]: Dec 12 17:26:19.517 INFO Fetch successful Dec 12 17:26:19.524262 coreos-metadata[1946]: Dec 12 17:26:19.517 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 12 17:26:19.524538 coreos-metadata[1946]: Dec 12 17:26:19.524 INFO Fetch successful Dec 12 17:26:19.524538 coreos-metadata[1946]: Dec 12 17:26:19.524 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 12 17:26:19.525686 coreos-metadata[1946]: Dec 12 17:26:19.525 INFO Fetch successful Dec 12 17:26:19.530946 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:26:19.637047 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 12 17:26:19.647056 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 12 17:26:19.769414 bash[2056]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:26:19.804318 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:26:19.822751 systemd[1]: Starting sshkeys.service... Dec 12 17:26:19.919479 systemd-logind[1962]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:26:19.919548 systemd-logind[1962]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 12 17:26:19.921971 systemd-logind[1962]: New seat seat0. Dec 12 17:26:19.926012 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Dec 12 17:26:19.929077 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:26:19.932448 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:26:19.963979 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 17:26:19.976883 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 17:26:20.031582 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 17:26:20.043376 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 17:26:20.052951 dbus-daemon[1947]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2021 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 17:26:20.070867 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 17:26:20.249463 coreos-metadata[2115]: Dec 12 17:26:20.247 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 17:26:20.251375 coreos-metadata[2115]: Dec 12 17:26:20.251 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 12 17:26:20.252327 coreos-metadata[2115]: Dec 12 17:26:20.252 INFO Fetch successful Dec 12 17:26:20.254381 coreos-metadata[2115]: Dec 12 17:26:20.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 17:26:20.261307 coreos-metadata[2115]: Dec 12 17:26:20.259 INFO Fetch successful Dec 12 17:26:20.269095 unknown[2115]: wrote ssh authorized keys file for user: core Dec 12 17:26:20.325127 containerd[2012]: time="2025-12-12T17:26:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:26:20.331346 containerd[2012]: time="2025-12-12T17:26:20.329258829Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:26:20.344122 amazon-ssm-agent[2055]: Initializing new seelog logger Dec 12 17:26:20.348964 amazon-ssm-agent[2055]: New Seelog Logger Creation Complete Dec 12 17:26:20.348964 amazon-ssm-agent[2055]: 2025/12/12 17:26:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:20.348964 amazon-ssm-agent[2055]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:20.353312 amazon-ssm-agent[2055]: 2025/12/12 17:26:20 processing appconfig overrides Dec 12 17:26:20.353312 amazon-ssm-agent[2055]: 2025/12/12 17:26:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:20.353312 amazon-ssm-agent[2055]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:20.353312 amazon-ssm-agent[2055]: 2025/12/12 17:26:20 processing appconfig overrides Dec 12 17:26:20.353312 amazon-ssm-agent[2055]: 2025/12/12 17:26:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:20.353312 amazon-ssm-agent[2055]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 12 17:26:20.353312 amazon-ssm-agent[2055]: 2025/12/12 17:26:20 processing appconfig overrides Dec 12 17:26:20.357310 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.3525 INFO Proxy environment variables: Dec 12 17:26:20.367316 amazon-ssm-agent[2055]: 2025/12/12 17:26:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:20.367316 amazon-ssm-agent[2055]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:20.367316 amazon-ssm-agent[2055]: 2025/12/12 17:26:20 processing appconfig overrides Dec 12 17:26:20.414344 update-ssh-keys[2148]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:26:20.419212 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 17:26:20.426333 containerd[2012]: time="2025-12-12T17:26:20.419189037Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.896µs" Dec 12 17:26:20.426333 containerd[2012]: time="2025-12-12T17:26:20.419239629Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:26:20.426333 containerd[2012]: time="2025-12-12T17:26:20.419332365Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:26:20.426333 containerd[2012]: time="2025-12-12T17:26:20.419365401Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:26:20.426333 containerd[2012]: time="2025-12-12T17:26:20.423398373Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:26:20.426333 containerd[2012]: time="2025-12-12T17:26:20.423459261Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:26:20.426333 containerd[2012]: time="2025-12-12T17:26:20.423584949Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:26:20.426333 containerd[2012]: time="2025-12-12T17:26:20.423610545Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:26:20.429437 systemd[1]: Finished sshkeys.service. 
Dec 12 17:26:20.431721 containerd[2012]: time="2025-12-12T17:26:20.426669645Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:26:20.431721 containerd[2012]: time="2025-12-12T17:26:20.427368585Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:26:20.431721 containerd[2012]: time="2025-12-12T17:26:20.427410525Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:26:20.431721 containerd[2012]: time="2025-12-12T17:26:20.427432845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:26:20.431721 containerd[2012]: time="2025-12-12T17:26:20.427789377Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:26:20.431721 containerd[2012]: time="2025-12-12T17:26:20.428558853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:26:20.431721 containerd[2012]: time="2025-12-12T17:26:20.428764065Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:26:20.438370 containerd[2012]: time="2025-12-12T17:26:20.431617365Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:26:20.438370 containerd[2012]: time="2025-12-12T17:26:20.432371973Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:26:20.438370 containerd[2012]: time="2025-12-12T17:26:20.432400833Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:26:20.438370 containerd[2012]: time="2025-12-12T17:26:20.432478929Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:26:20.438370 containerd[2012]: time="2025-12-12T17:26:20.432874653Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:26:20.438370 containerd[2012]: time="2025-12-12T17:26:20.434607669Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:26:20.444926 containerd[2012]: time="2025-12-12T17:26:20.444841341Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:26:20.445083 containerd[2012]: time="2025-12-12T17:26:20.444954285Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:26:20.445159 containerd[2012]: time="2025-12-12T17:26:20.445124781Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:26:20.445222 containerd[2012]: time="2025-12-12T17:26:20.445155813Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:26:20.445222 
containerd[2012]: time="2025-12-12T17:26:20.445186053Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445214337Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445244421Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445303161Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445336437Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445365225Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445394397Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445427193Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445457649Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445494081Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445724013Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445763409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445796505Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445829913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:26:20.445361 containerd[2012]: time="2025-12-12T17:26:20.445858053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.445885125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.445913601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.445947969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.445976781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.446002845Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 
17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.446030157Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.446087133Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.446153541Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.446187945Z" level=info msg="Start snapshots syncer" Dec 12 17:26:20.448932 containerd[2012]: time="2025-12-12T17:26:20.446252865Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:26:20.455786 containerd[2012]: time="2025-12-12T17:26:20.455241345Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:26:20.455786 containerd[2012]: time="2025-12-12T17:26:20.455388561Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.455515809Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.455758965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.455816529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.455845389Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.455872137Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.455901849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.455932833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.455965941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.455993925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:26:20.456042 containerd[2012]: time="2025-12-12T17:26:20.456022857Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:26:20.456865 containerd[2012]: time="2025-12-12T17:26:20.456112377Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:26:20.456865 containerd[2012]: time="2025-12-12T17:26:20.456146037Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:26:20.456865 containerd[2012]: time="2025-12-12T17:26:20.456171909Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:26:20.456865 containerd[2012]: time="2025-12-12T17:26:20.456196617Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:26:20.456865 containerd[2012]: time="2025-12-12T17:26:20.456218445Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:26:20.456865 containerd[2012]: time="2025-12-12T17:26:20.456245253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:26:20.458900 containerd[2012]: time="2025-12-12T17:26:20.456271881Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:26:20.458900 containerd[2012]: time="2025-12-12T17:26:20.458396457Z" level=info msg="runtime interface created" Dec 12 17:26:20.458900 containerd[2012]: time="2025-12-12T17:26:20.458419161Z" level=info msg="created NRI interface" Dec 12 17:26:20.458900 containerd[2012]: time="2025-12-12T17:26:20.458448933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:26:20.458900 containerd[2012]: time="2025-12-12T17:26:20.458482725Z" level=info msg="Connect containerd service" Dec 12 17:26:20.458900 containerd[2012]: time="2025-12-12T17:26:20.458541717Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:26:20.460308 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.3526 INFO https_proxy: Dec 12 17:26:20.460462 containerd[2012]: time="2025-12-12T17:26:20.459839193Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config 
found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:26:20.566389 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.3526 INFO http_proxy: Dec 12 17:26:20.660548 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.3526 INFO no_proxy: Dec 12 17:26:20.761504 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.3528 INFO Checking if agent identity type OnPrem can be assumed Dec 12 17:26:20.766966 locksmithd[2025]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:26:20.865363 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.3528 INFO Checking if agent identity type EC2 can be assumed Dec 12 17:26:20.962768 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.7743 INFO Agent will take identity from EC2 Dec 12 17:26:20.966581 polkitd[2121]: Started polkitd version 126 Dec 12 17:26:20.996531 polkitd[2121]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 17:26:20.997160 polkitd[2121]: Loading rules from directory /run/polkit-1/rules.d Dec 12 17:26:20.997236 polkitd[2121]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 17:26:20.997885 polkitd[2121]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 17:26:20.997935 polkitd[2121]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 17:26:20.998018 polkitd[2121]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 17:26:21.003757 polkitd[2121]: Finished loading, compiling and executing 2 rules Dec 12 17:26:21.006705 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 17:26:21.015603 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 17:26:21.016397 polkitd[2121]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 17:26:21.046685 containerd[2012]: time="2025-12-12T17:26:21.045829292Z" level=info msg="Start subscribing containerd event" Dec 12 17:26:21.046685 containerd[2012]: time="2025-12-12T17:26:21.045922196Z" level=info msg="Start recovering state" Dec 12 17:26:21.046685 containerd[2012]: time="2025-12-12T17:26:21.046075904Z" level=info msg="Start event monitor" Dec 12 17:26:21.046685 containerd[2012]: time="2025-12-12T17:26:21.046099880Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:26:21.046685 containerd[2012]: time="2025-12-12T17:26:21.046120196Z" level=info msg="Start streaming server" Dec 12 17:26:21.046685 containerd[2012]: time="2025-12-12T17:26:21.046141508Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:26:21.046685 containerd[2012]: time="2025-12-12T17:26:21.046159220Z" level=info msg="runtime interface starting up..." Dec 12 17:26:21.046685 containerd[2012]: time="2025-12-12T17:26:21.046173992Z" level=info msg="starting plugins..." Dec 12 17:26:21.046685 containerd[2012]: time="2025-12-12T17:26:21.046202864Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:26:21.050699 containerd[2012]: time="2025-12-12T17:26:21.049157432Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:26:21.050699 containerd[2012]: time="2025-12-12T17:26:21.049368692Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 17:26:21.050699 containerd[2012]: time="2025-12-12T17:26:21.049553756Z" level=info msg="containerd successfully booted in 0.725201s" Dec 12 17:26:21.050479 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:26:21.063615 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.7782 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 12 17:26:21.064185 systemd-resolved[1555]: System hostname changed to 'ip-172-31-16-55'. Dec 12 17:26:21.064260 systemd-hostnamed[2021]: Hostname set to (transient) Dec 12 17:26:21.067653 amazon-ssm-agent[2055]: 2025/12/12 17:26:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:21.067653 amazon-ssm-agent[2055]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:26:21.067807 amazon-ssm-agent[2055]: 2025/12/12 17:26:21 processing appconfig overrides Dec 12 17:26:21.111303 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.7782 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 12 17:26:21.111303 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.7782 INFO [amazon-ssm-agent] Starting Core Agent Dec 12 17:26:21.111303 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.7782 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 12 17:26:21.111303 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.7813 INFO [Registrar] Starting registrar module Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.7870 INFO [EC2Identity] Checking disk for registration info Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.7871 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:20.7872 INFO [EC2Identity] Generating registration keypair Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:21.0176 INFO [EC2Identity] Checking write access before registering Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:21.0184 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:21.0673 INFO [EC2Identity] EC2 registration was successful. Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:21.0673 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:21.0675 INFO [CredentialRefresher] credentialRefresher has started Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:21.0675 INFO [CredentialRefresher] Starting credentials refresher loop Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:21.1108 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 12 17:26:21.111527 amazon-ssm-agent[2055]: 2025-12-12 17:26:21.1111 INFO [CredentialRefresher] Credentials ready Dec 12 17:26:21.162823 amazon-ssm-agent[2055]: 2025-12-12 17:26:21.1115 INFO [CredentialRefresher] Next credential rotation will be in 29.999989927 minutes Dec 12 17:26:21.432392 tar[1979]: linux-arm64/README.md Dec 12 17:26:21.457685 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:26:21.760640 sshd_keygen[1996]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:26:21.801396 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:26:21.808892 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:26:21.854062 systemd[1]: issuegen.service: Deactivated successfully. 
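During the containerd startup captured above, the only error is non-fatal: the CRI plugin found no network config in /etc/cni/net.d, so pod networking stays uninitialized until a CNI add-on installs one. Below is a minimal diagnostic sketch in Python, assuming only the default paths reported in the logged CRI config (confDir /etc/cni/net.d, binDirs /opt/cni/bin); the script itself is illustrative and not part of the boot sequence.

#!/usr/bin/env python3
"""Check whether the CNI paths containerd reported actually contain a config and plugins."""
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")   # confDir from the CRI config logged above
CNI_BIN_DIR = Path("/opt/cni/bin")      # binDirs from the CRI config logged above

def main() -> None:
    confs = sorted(CNI_CONF_DIR.glob("*.conf")) + sorted(CNI_CONF_DIR.glob("*.conflist"))
    if confs:
        for conf in confs:
            print(f"found CNI config: {conf}")
    else:
        print(f"no CNI config under {CNI_CONF_DIR}; "
              "containerd will keep reporting 'cni plugin not initialized'")
    plugins = sorted(p.name for p in CNI_BIN_DIR.iterdir()) if CNI_BIN_DIR.is_dir() else []
    print(f"CNI plugin binaries in {CNI_BIN_DIR}: {', '.join(plugins) or 'none'}")

if __name__ == "__main__":
    main()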
Dec 12 17:26:21.856378 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:26:21.862666 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:26:21.897086 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:26:21.903899 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:26:21.909857 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 17:26:21.912814 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:26:22.137350 amazon-ssm-agent[2055]: 2025-12-12 17:26:22.1371 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 12 17:26:22.238274 amazon-ssm-agent[2055]: 2025-12-12 17:26:22.1427 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2228) started Dec 12 17:26:22.338451 amazon-ssm-agent[2055]: 2025-12-12 17:26:22.1427 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 12 17:26:23.565942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:23.570245 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:26:23.577469 systemd[1]: Startup finished in 4.194s (kernel) + 12.377s (initrd) + 15.296s (userspace) = 31.868s. Dec 12 17:26:23.583671 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:26:24.237204 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:26:24.239917 systemd[1]: Started sshd@0-172.31.16.55:22-139.178.68.195:39106.service - OpenSSH per-connection server daemon (139.178.68.195:39106). Dec 12 17:26:24.643015 sshd[2254]: Accepted publickey for core from 139.178.68.195 port 39106 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:24.647736 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:24.670044 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:26:24.674542 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:26:24.691717 systemd-logind[1962]: New session 1 of user core. Dec 12 17:26:24.714496 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:26:24.721762 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:26:24.740075 (systemd)[2260]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:26:24.746845 systemd-logind[1962]: New session c1 of user core. Dec 12 17:26:25.042064 systemd[2260]: Queued start job for default target default.target. Dec 12 17:26:25.051834 systemd[2260]: Created slice app.slice - User Application Slice. Dec 12 17:26:25.051910 systemd[2260]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:26:25.051942 systemd[2260]: Reached target paths.target - Paths. Dec 12 17:26:25.052045 systemd[2260]: Reached target timers.target - Timers. Dec 12 17:26:25.058488 systemd[2260]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:26:25.060167 systemd[2260]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 17:26:25.087114 systemd[2260]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
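systemd summarizes the boot above as kernel + initrd + userspace phases plus a total. When comparing boots across hosts it can be handy to parse that summary line; a tiny sketch, using the exact line logged above as sample input (each figure is rounded independently, which is why the phases sum to 31.867s against the printed 31.868s total):

import re

# Sample copied verbatim from the journal line above.
LINE = ("Startup finished in 4.194s (kernel) + 12.377s (initrd) "
        "+ 15.296s (userspace) = 31.868s.")

phases = {name: float(secs) for secs, name in re.findall(r"([\d.]+)s \((\w+)\)", LINE)}
total = float(re.search(r"= ([\d.]+)s", LINE).group(1))
print(phases)                                 # {'kernel': 4.194, 'initrd': 12.377, 'userspace': 15.296}
print(round(sum(phases.values()), 3), total)  # 31.867 vs. 31.868 (independent rounding)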
Dec 12 17:26:25.087256 systemd[2260]: Reached target sockets.target - Sockets. Dec 12 17:26:25.092229 systemd[2260]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:26:25.092523 systemd[2260]: Reached target basic.target - Basic System. Dec 12 17:26:25.092634 systemd[2260]: Reached target default.target - Main User Target. Dec 12 17:26:25.092696 systemd[2260]: Startup finished in 330ms. Dec 12 17:26:25.093396 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:26:25.106619 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:26:25.204384 systemd[1]: Started sshd@1-172.31.16.55:22-139.178.68.195:39114.service - OpenSSH per-connection server daemon (139.178.68.195:39114). Dec 12 17:26:25.387438 kubelet[2244]: E1212 17:26:25.385877 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:26:25.390625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:26:25.390942 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:26:25.392490 systemd[1]: kubelet.service: Consumed 1.364s CPU time, 250.8M memory peak. Dec 12 17:26:25.416358 sshd[2273]: Accepted publickey for core from 139.178.68.195 port 39114 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:25.418916 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:25.427844 systemd-logind[1962]: New session 2 of user core. Dec 12 17:26:25.435599 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:26:25.501342 sshd[2278]: Connection closed by 139.178.68.195 port 39114 Dec 12 17:26:25.502220 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:25.509313 systemd[1]: sshd@1-172.31.16.55:22-139.178.68.195:39114.service: Deactivated successfully. Dec 12 17:26:25.513964 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:26:25.515919 systemd-logind[1962]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:26:25.519205 systemd-logind[1962]: Removed session 2. Dec 12 17:26:25.542710 systemd[1]: Started sshd@2-172.31.16.55:22-139.178.68.195:39120.service - OpenSSH per-connection server daemon (139.178.68.195:39120). Dec 12 17:26:25.731570 sshd[2284]: Accepted publickey for core from 139.178.68.195 port 39120 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:25.734096 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:25.742899 systemd-logind[1962]: New session 3 of user core. Dec 12 17:26:25.751598 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:26:25.810633 sshd[2287]: Connection closed by 139.178.68.195 port 39120 Dec 12 17:26:25.811088 sshd-session[2284]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:25.818703 systemd[1]: sshd@2-172.31.16.55:22-139.178.68.195:39120.service: Deactivated successfully. Dec 12 17:26:25.823684 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:26:25.827376 systemd-logind[1962]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:26:25.829357 systemd-logind[1962]: Removed session 3. 
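The kubelet failure above is expected on a node that has not been joined to a cluster yet: kubelet.service starts, finds no /var/lib/kubelet/config.yaml, and exits, which is why systemd records the exit-code failure shortly after multi-user.target. On kubeadm-managed nodes that file is normally written by kubeadm init or kubeadm join. A small pre-flight sketch, assuming only the path named in the error message (everything else is illustrative):

from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")   # path from the kubelet error above

def kubelet_config_present() -> bool:
    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes); kubelet can start")
        return True
    print(f"{KUBELET_CONFIG} missing: run 'kubeadm init' or 'kubeadm join' (or provision the file) "
          "before expecting kubelet.service to stay up")
    return False

if __name__ == "__main__":
    kubelet_config_present()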
Dec 12 17:26:25.848878 systemd[1]: Started sshd@3-172.31.16.55:22-139.178.68.195:39134.service - OpenSSH per-connection server daemon (139.178.68.195:39134). Dec 12 17:26:26.044178 sshd[2293]: Accepted publickey for core from 139.178.68.195 port 39134 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:26.047062 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:26.055496 systemd-logind[1962]: New session 4 of user core. Dec 12 17:26:26.064608 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:26:26.130356 sshd[2296]: Connection closed by 139.178.68.195 port 39134 Dec 12 17:26:26.131147 sshd-session[2293]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:26.138583 systemd[1]: sshd@3-172.31.16.55:22-139.178.68.195:39134.service: Deactivated successfully. Dec 12 17:26:26.142019 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:26:26.144876 systemd-logind[1962]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:26:26.147716 systemd-logind[1962]: Removed session 4. Dec 12 17:26:26.170664 systemd[1]: Started sshd@4-172.31.16.55:22-139.178.68.195:39140.service - OpenSSH per-connection server daemon (139.178.68.195:39140). Dec 12 17:26:26.738644 systemd-resolved[1555]: Clock change detected. Flushing caches. Dec 12 17:26:26.806306 sshd[2302]: Accepted publickey for core from 139.178.68.195 port 39140 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:26.808601 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:26.817273 systemd-logind[1962]: New session 5 of user core. Dec 12 17:26:26.826148 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:26:26.945047 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:26:26.945645 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:26.964794 sudo[2306]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:26.989456 sshd[2305]: Connection closed by 139.178.68.195 port 39140 Dec 12 17:26:26.989281 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:26.996929 systemd[1]: sshd@4-172.31.16.55:22-139.178.68.195:39140.service: Deactivated successfully. Dec 12 17:26:27.000808 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:26:27.006727 systemd-logind[1962]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:26:27.008633 systemd-logind[1962]: Removed session 5. Dec 12 17:26:27.025143 systemd[1]: Started sshd@5-172.31.16.55:22-139.178.68.195:39148.service - OpenSSH per-connection server daemon (139.178.68.195:39148). Dec 12 17:26:27.216303 sshd[2312]: Accepted publickey for core from 139.178.68.195 port 39148 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:27.219274 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:27.229963 systemd-logind[1962]: New session 6 of user core. Dec 12 17:26:27.236302 systemd[1]: Started session-6.scope - Session 6 of User core. 
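The bursts above are short SSH sessions from 139.178.68.195 for the core user: a publickey login, at most one sudo, then a disconnect. When reviewing a journal dump like this, it helps to pair the "Accepted publickey" and "Connection closed" records by source port. A rough sketch over the message formats visible above; the sample strings are copied from the log and the pairing logic is only illustrative:

import re

SAMPLES = [
    "Accepted publickey for core from 139.178.68.195 port 39106 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE",
    "Accepted publickey for core from 139.178.68.195 port 39114 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE",
    "Connection closed by 139.178.68.195 port 39114",
]

ACCEPT = re.compile(r"Accepted (\S+) for (\S+) from (\S+) port (\d+)")
CLOSE = re.compile(r"Connection closed by (\S+) port (\d+)")

opened, closed = {}, set()
for line in SAMPLES:
    if m := ACCEPT.search(line):
        method, user, addr, port = m.groups()
        opened[(addr, port)] = (user, method)
    elif m := CLOSE.search(line):
        closed.add(m.groups())

for (addr, port), (user, method) in opened.items():
    state = "closed" if (addr, port) in closed else "no close record in sample"
    print(f"{user}@{addr}:{port} via {method}: {state}")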
Dec 12 17:26:27.284667 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:26:27.285319 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:27.295338 sudo[2317]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:27.307460 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:26:27.308725 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:27.326557 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:26:27.391809 kernel: kauditd_printk_skb: 105 callbacks suppressed Dec 12 17:26:27.391925 kernel: audit: type=1305 audit(1765560387.387:243): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:26:27.387000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:26:27.392087 augenrules[2339]: No rules Dec 12 17:26:27.393242 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:26:27.387000 audit[2339]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc02ff040 a2=420 a3=0 items=0 ppid=2320 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:27.393720 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:26:27.400413 kernel: audit: type=1300 audit(1765560387.387:243): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc02ff040 a2=420 a3=0 items=0 ppid=2320 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:27.400806 sudo[2316]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:27.387000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:27.404445 kernel: audit: type=1327 audit(1765560387.387:243): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:27.404531 kernel: audit: type=1130 audit(1765560387.392:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.413722 kernel: audit: type=1131 audit(1765560387.392:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:27.413801 kernel: audit: type=1106 audit(1765560387.400:246): pid=2316 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.400000 audit[2316]: USER_END pid=2316 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.400000 audit[2316]: CRED_DISP pid=2316 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.424317 kernel: audit: type=1104 audit(1765560387.400:247): pid=2316 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.426899 sshd[2315]: Connection closed by 139.178.68.195 port 39148 Dec 12 17:26:27.425961 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:27.428000 audit[2312]: USER_END pid=2312 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:27.434808 systemd[1]: sshd@5-172.31.16.55:22-139.178.68.195:39148.service: Deactivated successfully. Dec 12 17:26:27.428000 audit[2312]: CRED_DISP pid=2312 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:27.438676 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:26:27.442011 kernel: audit: type=1106 audit(1765560387.428:248): pid=2312 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:27.442095 kernel: audit: type=1104 audit(1765560387.428:249): pid=2312 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:27.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.55:22-139.178.68.195:39148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.448143 kernel: audit: type=1131 audit(1765560387.433:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.55:22-139.178.68.195:39148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.449046 systemd-logind[1962]: Session 6 logged out. Waiting for processes to exit. 
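In the audit records above, the command that triggered each event is carried in the PROCTITLE field as hex-encoded, NUL-separated argv (audit hex-encodes the field because it contains NUL separators). A short decoder sketch; the sample value is copied from the auditctl record above:

def decode_proctitle(hexstr: str) -> str:
    """Turn an audit PROCTITLE hex blob back into the original command line."""
    argv = bytes.fromhex(hexstr).split(b"\x00")
    return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

# Copied from the 'audit: PROCTITLE' record that followed the auditctl syscall above.
SAMPLE = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
print(decode_proctitle(SAMPLE))   # -> /sbin/auditctl -R /etc/audit/audit.rules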
Dec 12 17:26:27.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.55:22-139.178.68.195:39150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.465308 systemd[1]: Started sshd@6-172.31.16.55:22-139.178.68.195:39150.service - OpenSSH per-connection server daemon (139.178.68.195:39150). Dec 12 17:26:27.467511 systemd-logind[1962]: Removed session 6. Dec 12 17:26:27.648000 audit[2348]: USER_ACCT pid=2348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:27.649499 sshd[2348]: Accepted publickey for core from 139.178.68.195 port 39150 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:26:27.650000 audit[2348]: CRED_ACQ pid=2348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:27.650000 audit[2348]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc6cf4a0 a2=3 a3=0 items=0 ppid=1 pid=2348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:27.650000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:26:27.651714 sshd-session[2348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:27.660142 systemd-logind[1962]: New session 7 of user core. Dec 12 17:26:27.670187 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:26:27.674000 audit[2348]: USER_START pid=2348 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:27.678000 audit[2351]: CRED_ACQ pid=2351 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:26:27.716439 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:26:27.715000 audit[2352]: USER_ACCT pid=2352 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.715000 audit[2352]: CRED_REFR pid=2352 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:27.717123 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:27.720000 audit[2352]: USER_START pid=2352 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:29.002325 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:26:29.018611 (dockerd)[2369]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:26:30.133811 dockerd[2369]: time="2025-12-12T17:26:30.133548796Z" level=info msg="Starting up" Dec 12 17:26:30.138625 dockerd[2369]: time="2025-12-12T17:26:30.138552712Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:26:30.161671 dockerd[2369]: time="2025-12-12T17:26:30.161499497Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:26:30.242290 dockerd[2369]: time="2025-12-12T17:26:30.242242157Z" level=info msg="Loading containers: start." Dec 12 17:26:30.257956 kernel: Initializing XFRM netlink socket Dec 12 17:26:30.472000 audit[2418]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.472000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcd2b91e0 a2=0 a3=0 items=0 ppid=2369 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.472000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:26:30.477000 audit[2420]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.477000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd6f3e450 a2=0 a3=0 items=0 ppid=2369 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:26:30.480000 audit[2422]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.480000 audit[2422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcebc1f60 a2=0 a3=0 items=0 ppid=2369 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:26:30.485000 audit[2424]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.485000 audit[2424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=ffffc0287370 a2=0 a3=0 items=0 ppid=2369 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.485000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:26:30.489000 audit[2426]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.489000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd46145a0 a2=0 a3=0 items=0 ppid=2369 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.489000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:26:30.494000 audit[2428]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.494000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd35b81d0 a2=0 a3=0 items=0 ppid=2369 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:30.498000 audit[2430]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.498000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc2767160 a2=0 a3=0 items=0 ppid=2369 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.498000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:26:30.503000 audit[2432]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.503000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff0bb8360 a2=0 a3=0 items=0 ppid=2369 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.503000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:26:30.584000 audit[2435]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.584000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffcaef9840 a2=0 a3=0 items=0 ppid=2369 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.584000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:26:30.588000 audit[2437]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.588000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffde4e6980 a2=0 a3=0 items=0 ppid=2369 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:26:30.592000 audit[2439]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.592000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc1e70d00 a2=0 a3=0 items=0 ppid=2369 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:26:30.597000 audit[2441]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.597000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd8682190 a2=0 a3=0 items=0 ppid=2369 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.597000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:30.602000 audit[2443]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.602000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffdf5741e0 a2=0 a3=0 items=0 ppid=2369 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.602000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:26:30.672000 audit[2473]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.672000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd0236820 a2=0 a3=0 items=0 ppid=2369 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 17:26:30.672000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:26:30.676000 audit[2475]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.676000 audit[2475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcc494180 a2=0 a3=0 items=0 ppid=2369 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.676000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:26:30.680000 audit[2477]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.680000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffccc95520 a2=0 a3=0 items=0 ppid=2369 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:26:30.685000 audit[2479]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.685000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec460c00 a2=0 a3=0 items=0 ppid=2369 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:26:30.689000 audit[2481]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.689000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe7eeb770 a2=0 a3=0 items=0 ppid=2369 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:26:30.694000 audit[2483]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.694000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd5aae350 a2=0 a3=0 items=0 ppid=2369 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:30.698000 audit[2485]: NETFILTER_CFG table=filter:21 
family=10 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.698000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd7952e20 a2=0 a3=0 items=0 ppid=2369 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:26:30.703000 audit[2487]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.703000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffe37b9d00 a2=0 a3=0 items=0 ppid=2369 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.703000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:26:30.708000 audit[2489]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.708000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffcf73c9e0 a2=0 a3=0 items=0 ppid=2369 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.708000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 17:26:30.713000 audit[2491]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.713000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd92f9730 a2=0 a3=0 items=0 ppid=2369 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.713000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:26:30.717000 audit[2493]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.717000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc0f6b370 a2=0 a3=0 items=0 ppid=2369 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:26:30.721000 audit[2495]: NETFILTER_CFG table=filter:26 family=10 entries=1 
op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.721000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe2c3ac70 a2=0 a3=0 items=0 ppid=2369 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.721000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:30.726000 audit[2497]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.726000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcdb74ff0 a2=0 a3=0 items=0 ppid=2369 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:26:30.738000 audit[2502]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.738000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff8458110 a2=0 a3=0 items=0 ppid=2369 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:26:30.743000 audit[2504]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.743000 audit[2504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd43ba9b0 a2=0 a3=0 items=0 ppid=2369 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:26:30.748000 audit[2506]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.748000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc2c1b6f0 a2=0 a3=0 items=0 ppid=2369 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.748000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:26:30.753000 audit[2508]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.753000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff675c4c0 a2=0 a3=0 
items=0 ppid=2369 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.753000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:26:30.757000 audit[2510]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.757000 audit[2510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffde286fa0 a2=0 a3=0 items=0 ppid=2369 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:26:30.762000 audit[2512]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:30.762000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff6fc28e0 a2=0 a3=0 items=0 ppid=2369 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.762000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:26:30.775157 (udev-worker)[2391]: Network interface NamePolicy= disabled on kernel command line. 
Dec 12 17:26:30.797000 audit[2517]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.797000 audit[2517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffc551ac30 a2=0 a3=0 items=0 ppid=2369 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:26:30.802000 audit[2519]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.802000 audit[2519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe3425ef0 a2=0 a3=0 items=0 ppid=2369 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.802000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:26:30.821000 audit[2527]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.821000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffed814710 a2=0 a3=0 items=0 ppid=2369 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:26:30.839000 audit[2533]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.839000 audit[2533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd1e05900 a2=0 a3=0 items=0 ppid=2369 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:26:30.845000 audit[2535]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.845000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffdf1476c0 a2=0 a3=0 items=0 ppid=2369 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.845000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:26:30.850000 audit[2537]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.850000 audit[2537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffebdd4000 a2=0 a3=0 items=0 ppid=2369 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:26:30.854000 audit[2539]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.854000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc60e1a50 a2=0 a3=0 items=0 ppid=2369 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.854000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:26:30.859000 audit[2541]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:30.859000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdd8df8a0 a2=0 a3=0 items=0 ppid=2369 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.859000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:26:30.862163 systemd-networkd[1585]: docker0: Link UP Dec 12 17:26:30.867693 dockerd[2369]: time="2025-12-12T17:26:30.867643064Z" level=info msg="Loading containers: done." Dec 12 17:26:30.897712 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4210637270-merged.mount: Deactivated successfully. 
Dec 12 17:26:30.915707 dockerd[2369]: time="2025-12-12T17:26:30.914980856Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:26:30.915707 dockerd[2369]: time="2025-12-12T17:26:30.915108908Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:26:30.915707 dockerd[2369]: time="2025-12-12T17:26:30.915415976Z" level=info msg="Initializing buildkit" Dec 12 17:26:30.971701 dockerd[2369]: time="2025-12-12T17:26:30.971646357Z" level=info msg="Completed buildkit initialization" Dec 12 17:26:30.986760 dockerd[2369]: time="2025-12-12T17:26:30.986456997Z" level=info msg="Daemon has completed initialization" Dec 12 17:26:30.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:30.987100 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:26:30.989403 dockerd[2369]: time="2025-12-12T17:26:30.988167417Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:26:31.910871 containerd[2012]: time="2025-12-12T17:26:31.910765977Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 12 17:26:32.756050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141649527.mount: Deactivated successfully. Dec 12 17:26:34.046790 containerd[2012]: time="2025-12-12T17:26:34.046722440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:34.050128 containerd[2012]: time="2025-12-12T17:26:34.050067836Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22974850" Dec 12 17:26:34.052741 containerd[2012]: time="2025-12-12T17:26:34.052641224Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:34.058524 containerd[2012]: time="2025-12-12T17:26:34.058442252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:34.061402 containerd[2012]: time="2025-12-12T17:26:34.060471512Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.149627811s" Dec 12 17:26:34.061402 containerd[2012]: time="2025-12-12T17:26:34.060529304Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 12 17:26:34.062123 containerd[2012]: time="2025-12-12T17:26:34.062083892Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 12 17:26:35.497050 containerd[2012]: time="2025-12-12T17:26:35.496984643Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:35.498732 containerd[2012]: time="2025-12-12T17:26:35.498659543Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Dec 12 17:26:35.499905 containerd[2012]: time="2025-12-12T17:26:35.499763363Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:35.504372 containerd[2012]: time="2025-12-12T17:26:35.504291767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:35.506528 containerd[2012]: time="2025-12-12T17:26:35.506301467Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.444003783s" Dec 12 17:26:35.506528 containerd[2012]: time="2025-12-12T17:26:35.506360915Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 12 17:26:35.508350 containerd[2012]: time="2025-12-12T17:26:35.508075511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 12 17:26:36.036386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:26:36.040395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:36.420902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:36.424618 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 17:26:36.424734 kernel: audit: type=1130 audit(1765560396.422:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:36.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:36.437571 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:26:36.524097 kubelet[2654]: E1212 17:26:36.524003 2654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:26:36.536428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:26:36.536740 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 12 17:26:36.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:36.537822 systemd[1]: kubelet.service: Consumed 324ms CPU time, 107.3M memory peak. Dec 12 17:26:36.542915 kernel: audit: type=1131 audit(1765560396.536:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:36.873197 containerd[2012]: time="2025-12-12T17:26:36.873118982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:36.876274 containerd[2012]: time="2025-12-12T17:26:36.876180986Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14183580" Dec 12 17:26:36.878922 containerd[2012]: time="2025-12-12T17:26:36.878826350Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:36.884203 containerd[2012]: time="2025-12-12T17:26:36.884095898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:36.886599 containerd[2012]: time="2025-12-12T17:26:36.886011110Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.377882187s" Dec 12 17:26:36.886599 containerd[2012]: time="2025-12-12T17:26:36.886069598Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 12 17:26:36.887047 containerd[2012]: time="2025-12-12T17:26:36.887008118Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 12 17:26:38.205481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090992174.mount: Deactivated successfully. 
Dec 12 17:26:38.591800 containerd[2012]: time="2025-12-12T17:26:38.590660390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:38.592465 containerd[2012]: time="2025-12-12T17:26:38.592404686Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Dec 12 17:26:38.594630 containerd[2012]: time="2025-12-12T17:26:38.594555254Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:38.598826 containerd[2012]: time="2025-12-12T17:26:38.598758386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:38.600132 containerd[2012]: time="2025-12-12T17:26:38.600086222Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.712920652s" Dec 12 17:26:38.600310 containerd[2012]: time="2025-12-12T17:26:38.600278210Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 12 17:26:38.601154 containerd[2012]: time="2025-12-12T17:26:38.601088966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 12 17:26:39.191314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969089343.mount: Deactivated successfully. 
Dec 12 17:26:40.361185 containerd[2012]: time="2025-12-12T17:26:40.361097619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:40.364457 containerd[2012]: time="2025-12-12T17:26:40.363962883Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19576083" Dec 12 17:26:40.366712 containerd[2012]: time="2025-12-12T17:26:40.366651339Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:40.372410 containerd[2012]: time="2025-12-12T17:26:40.372347691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:40.374556 containerd[2012]: time="2025-12-12T17:26:40.374497911Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.773343569s" Dec 12 17:26:40.374762 containerd[2012]: time="2025-12-12T17:26:40.374728575Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 12 17:26:40.375672 containerd[2012]: time="2025-12-12T17:26:40.375627591Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 12 17:26:40.884790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762376773.mount: Deactivated successfully. 
Dec 12 17:26:40.898162 containerd[2012]: time="2025-12-12T17:26:40.898087086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:40.902081 containerd[2012]: time="2025-12-12T17:26:40.901959858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 12 17:26:40.904230 containerd[2012]: time="2025-12-12T17:26:40.904128606Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:40.911885 containerd[2012]: time="2025-12-12T17:26:40.911065878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:40.912987 containerd[2012]: time="2025-12-12T17:26:40.912945030Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 536.885751ms" Dec 12 17:26:40.913144 containerd[2012]: time="2025-12-12T17:26:40.913117254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 12 17:26:40.914644 containerd[2012]: time="2025-12-12T17:26:40.914576622Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 12 17:26:41.532461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907911982.mount: Deactivated successfully. Dec 12 17:26:44.846114 containerd[2012]: time="2025-12-12T17:26:44.846058185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:44.849270 containerd[2012]: time="2025-12-12T17:26:44.849191421Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=85823552" Dec 12 17:26:44.852047 containerd[2012]: time="2025-12-12T17:26:44.851968689Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:44.859873 containerd[2012]: time="2025-12-12T17:26:44.857784513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:44.859873 containerd[2012]: time="2025-12-12T17:26:44.859803430Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.944980976s" Dec 12 17:26:44.860104 containerd[2012]: time="2025-12-12T17:26:44.859843030Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 12 17:26:46.786457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
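Each image pull above ends with a containerd "Pulled image ... in <duration>" record. A minimal sketch, again assuming the journal excerpt is saved as plain text under the same illustrative file name, that extracts image names and pull durations (the regex tolerates the escaped quotes in containerd's logfmt output):

```python
# Extract image pull durations from containerd "Pulled image" records.
# "journal.txt" is an assumed, illustrative file name.
import re

pulled = re.compile(r'Pulled image \\?"([^"\\]+)\\?".*? in ([0-9.]+m?s)')
with open("journal.txt", encoding="utf-8") as f:
    for line in f:
        m = pulled.search(line)
        if m:
            print(f"{m.group(1)}: {m.group(2)}")
```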
Dec 12 17:26:46.791297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:47.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:47.121407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:47.131013 kernel: audit: type=1130 audit(1765560407.120:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:47.145291 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:26:47.226710 kubelet[2807]: E1212 17:26:47.226637 2807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:26:47.231830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:26:47.232342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:26:47.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:47.237735 systemd[1]: kubelet.service: Consumed 296ms CPU time, 106.4M memory peak. Dec 12 17:26:47.241145 kernel: audit: type=1131 audit(1765560407.232:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:51.555709 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 12 17:26:51.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:51.568898 kernel: audit: type=1131 audit(1765560411.555:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:51.575000 audit: BPF prog-id=66 op=UNLOAD Dec 12 17:26:51.578892 kernel: audit: type=1334 audit(1765560411.575:306): prog-id=66 op=UNLOAD Dec 12 17:26:51.925815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:51.926714 systemd[1]: kubelet.service: Consumed 296ms CPU time, 106.4M memory peak. Dec 12 17:26:51.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:51.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:51.936395 kernel: audit: type=1130 audit(1765560411.925:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:51.936495 kernel: audit: type=1131 audit(1765560411.925:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:51.934306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:51.992136 systemd[1]: Reload requested from client PID 2824 ('systemctl') (unit session-7.scope)... Dec 12 17:26:51.992173 systemd[1]: Reloading... Dec 12 17:26:52.231881 zram_generator::config[2874]: No configuration found. Dec 12 17:26:52.737584 systemd[1]: Reloading finished in 744 ms. Dec 12 17:26:52.792000 audit: BPF prog-id=70 op=LOAD Dec 12 17:26:52.792000 audit: BPF prog-id=71 op=LOAD Dec 12 17:26:52.798351 kernel: audit: type=1334 audit(1765560412.792:309): prog-id=70 op=LOAD Dec 12 17:26:52.798442 kernel: audit: type=1334 audit(1765560412.792:310): prog-id=71 op=LOAD Dec 12 17:26:52.798489 kernel: audit: type=1334 audit(1765560412.793:311): prog-id=47 op=UNLOAD Dec 12 17:26:52.793000 audit: BPF prog-id=47 op=UNLOAD Dec 12 17:26:52.801588 kernel: audit: type=1334 audit(1765560412.793:312): prog-id=48 op=UNLOAD Dec 12 17:26:52.793000 audit: BPF prog-id=48 op=UNLOAD Dec 12 17:26:52.795000 audit: BPF prog-id=72 op=LOAD Dec 12 17:26:52.805882 kernel: audit: type=1334 audit(1765560412.795:313): prog-id=72 op=LOAD Dec 12 17:26:52.805989 kernel: audit: type=1334 audit(1765560412.795:314): prog-id=50 op=UNLOAD Dec 12 17:26:52.806052 kernel: audit: type=1334 audit(1765560412.801:315): prog-id=73 op=LOAD Dec 12 17:26:52.795000 audit: BPF prog-id=50 op=UNLOAD Dec 12 17:26:52.801000 audit: BPF prog-id=73 op=LOAD Dec 12 17:26:52.808691 kernel: audit: type=1334 audit(1765560412.801:316): prog-id=49 op=UNLOAD Dec 12 17:26:52.808780 kernel: audit: type=1334 audit(1765560412.804:317): prog-id=74 op=LOAD Dec 12 17:26:52.801000 audit: BPF prog-id=49 op=UNLOAD Dec 12 17:26:52.811699 kernel: audit: type=1334 audit(1765560412.804:318): prog-id=60 op=UNLOAD Dec 12 17:26:52.804000 audit: BPF prog-id=74 op=LOAD Dec 12 17:26:52.804000 audit: BPF prog-id=60 op=UNLOAD Dec 12 17:26:52.806000 audit: BPF prog-id=75 op=LOAD Dec 12 17:26:52.810000 audit: BPF prog-id=76 op=LOAD Dec 12 17:26:52.810000 audit: BPF prog-id=61 op=UNLOAD Dec 12 17:26:52.810000 audit: BPF prog-id=62 op=UNLOAD Dec 12 17:26:52.815000 audit: BPF prog-id=77 op=LOAD Dec 12 17:26:52.815000 audit: BPF prog-id=57 op=UNLOAD Dec 12 17:26:52.815000 audit: BPF prog-id=78 op=LOAD Dec 12 17:26:52.815000 audit: BPF prog-id=79 op=LOAD Dec 12 17:26:52.815000 audit: BPF prog-id=58 op=UNLOAD Dec 12 17:26:52.815000 audit: BPF prog-id=59 op=UNLOAD Dec 12 17:26:52.823000 audit: BPF prog-id=80 op=LOAD Dec 12 17:26:52.823000 audit: BPF prog-id=69 op=UNLOAD Dec 12 17:26:52.825000 audit: BPF prog-id=81 op=LOAD Dec 12 17:26:52.825000 audit: BPF prog-id=54 op=UNLOAD Dec 12 17:26:52.825000 audit: BPF prog-id=82 op=LOAD Dec 12 17:26:52.825000 audit: BPF prog-id=83 op=LOAD Dec 12 17:26:52.825000 audit: BPF prog-id=55 op=UNLOAD Dec 12 17:26:52.825000 audit: BPF prog-id=56 op=UNLOAD Dec 12 17:26:52.828000 audit: BPF prog-id=84 op=LOAD Dec 12 17:26:52.828000 audit: BPF prog-id=63 op=UNLOAD Dec 12 17:26:52.828000 audit: BPF prog-id=85 
op=LOAD Dec 12 17:26:52.828000 audit: BPF prog-id=86 op=LOAD Dec 12 17:26:52.828000 audit: BPF prog-id=64 op=UNLOAD Dec 12 17:26:52.828000 audit: BPF prog-id=65 op=UNLOAD Dec 12 17:26:52.831000 audit: BPF prog-id=87 op=LOAD Dec 12 17:26:52.831000 audit: BPF prog-id=51 op=UNLOAD Dec 12 17:26:52.831000 audit: BPF prog-id=88 op=LOAD Dec 12 17:26:52.832000 audit: BPF prog-id=89 op=LOAD Dec 12 17:26:52.832000 audit: BPF prog-id=52 op=UNLOAD Dec 12 17:26:52.832000 audit: BPF prog-id=53 op=UNLOAD Dec 12 17:26:52.860068 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:26:52.860279 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:26:52.861159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:52.861329 systemd[1]: kubelet.service: Consumed 229ms CPU time, 95M memory peak. Dec 12 17:26:52.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:52.864507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:53.193802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:53.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:53.208400 (kubelet)[2935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:26:53.290455 kubelet[2935]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:26:53.291268 kubelet[2935]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:53.293883 kubelet[2935]: I1212 17:26:53.292274 2935 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:26:54.857445 kubelet[2935]: I1212 17:26:54.857395 2935 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:26:54.858099 kubelet[2935]: I1212 17:26:54.858073 2935 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:26:54.860480 kubelet[2935]: I1212 17:26:54.860449 2935 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:26:54.860631 kubelet[2935]: I1212 17:26:54.860608 2935 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:26:54.861135 kubelet[2935]: I1212 17:26:54.861111 2935 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:26:54.873385 kubelet[2935]: E1212 17:26:54.873320 2935 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:26:54.876250 kubelet[2935]: I1212 17:26:54.876197 2935 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:26:54.884170 kubelet[2935]: I1212 17:26:54.884128 2935 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:26:54.889807 kubelet[2935]: I1212 17:26:54.889759 2935 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 12 17:26:54.890287 kubelet[2935]: I1212 17:26:54.890225 2935 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:26:54.890544 kubelet[2935]: I1212 17:26:54.890276 2935 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:26:54.890544 kubelet[2935]: I1212 17:26:54.890535 2935 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:26:54.890810 kubelet[2935]: I1212 17:26:54.890557 2935 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:26:54.890810 kubelet[2935]: I1212 17:26:54.890741 2935 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:26:54.896241 kubelet[2935]: I1212 17:26:54.896183 2935 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:54.898595 kubelet[2935]: I1212 17:26:54.898541 2935 kubelet.go:475] "Attempting to sync node with API server" Dec 12 
17:26:54.898595 kubelet[2935]: I1212 17:26:54.898588 2935 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:26:54.898773 kubelet[2935]: I1212 17:26:54.898652 2935 kubelet.go:387] "Adding apiserver pod source" Dec 12 17:26:54.898773 kubelet[2935]: I1212 17:26:54.898686 2935 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:26:54.902900 kubelet[2935]: E1212 17:26:54.900969 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:26:54.902900 kubelet[2935]: I1212 17:26:54.901369 2935 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:26:54.902900 kubelet[2935]: I1212 17:26:54.902805 2935 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:26:54.903126 kubelet[2935]: I1212 17:26:54.902944 2935 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 17:26:54.903126 kubelet[2935]: W1212 17:26:54.903002 2935 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:26:54.907308 kubelet[2935]: I1212 17:26:54.907190 2935 server.go:1262] "Started kubelet" Dec 12 17:26:54.912737 kubelet[2935]: E1212 17:26:54.912675 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-55&limit=500&resourceVersion=0\": dial tcp 172.31.16.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:26:54.913175 kubelet[2935]: I1212 17:26:54.913134 2935 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:26:54.913444 kubelet[2935]: I1212 17:26:54.913362 2935 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:26:54.913517 kubelet[2935]: I1212 17:26:54.913460 2935 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:26:54.919810 kubelet[2935]: I1212 17:26:54.919737 2935 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:26:54.921429 kubelet[2935]: I1212 17:26:54.921396 2935 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:26:54.922954 kubelet[2935]: E1212 17:26:54.920159 2935 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.55:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-55.188087d7d0082367 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-55,UID:ip-172-31-16-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-55,},FirstTimestamp:2025-12-12 17:26:54.907147111 +0000 UTC m=+1.692112161,LastTimestamp:2025-12-12 
17:26:54.907147111 +0000 UTC m=+1.692112161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-55,}" Dec 12 17:26:54.926130 kubelet[2935]: I1212 17:26:54.926034 2935 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:26:54.934030 kubelet[2935]: I1212 17:26:54.933262 2935 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:26:54.936699 kubelet[2935]: E1212 17:26:54.933261 2935 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-55\" not found" Dec 12 17:26:54.939810 kubelet[2935]: E1212 17:26:54.939032 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:26:54.939000 audit[2951]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:54.939000 audit[2951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd927caa0 a2=0 a3=0 items=0 ppid=2935 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:26:54.941020 kubelet[2935]: E1212 17:26:54.940973 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-55?timeout=10s\": dial tcp 172.31.16.55:6443: connect: connection refused" interval="200ms" Dec 12 17:26:54.942196 kubelet[2935]: I1212 17:26:54.942155 2935 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:26:54.942496 kubelet[2935]: I1212 17:26:54.942465 2935 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:26:54.941000 audit[2952]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:54.941000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc73c51c0 a2=0 a3=0 items=0 ppid=2935 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:26:54.945946 kubelet[2935]: E1212 17:26:54.945122 2935 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:26:54.945946 kubelet[2935]: I1212 17:26:54.945371 2935 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:26:54.946337 kubelet[2935]: I1212 17:26:54.946290 2935 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:26:54.946472 kubelet[2935]: I1212 17:26:54.946440 2935 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:26:54.947000 audit[2954]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:54.947000 audit[2954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff12165e0 a2=0 a3=0 items=0 ppid=2935 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.947000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:54.951046 kubelet[2935]: I1212 17:26:54.950996 2935 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:26:54.951000 audit[2956]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:54.951000 audit[2956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd735b4e0 a2=0 a3=0 items=0 ppid=2935 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:54.963000 audit[2959]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:54.963000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdc4ff960 a2=0 a3=0 items=0 ppid=2935 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 12 17:26:54.964956 kubelet[2935]: I1212 17:26:54.964836 2935 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 12 17:26:54.965000 audit[2960]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:54.965000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc69538e0 a2=0 a3=0 items=0 ppid=2935 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:26:54.967056 kubelet[2935]: I1212 17:26:54.967032 2935 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 12 17:26:54.967132 kubelet[2935]: I1212 17:26:54.967065 2935 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:26:54.967132 kubelet[2935]: I1212 17:26:54.967111 2935 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:26:54.967234 kubelet[2935]: E1212 17:26:54.967183 2935 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:26:54.968000 audit[2962]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:54.968000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcebbd6b0 a2=0 a3=0 items=0 ppid=2935 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:26:54.970000 audit[2963]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:54.970000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6df45c0 a2=0 a3=0 items=0 ppid=2935 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.970000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:26:54.973000 audit[2964]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:54.973000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd66c2280 a2=0 a3=0 items=0 ppid=2935 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:26:54.975000 audit[2966]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:54.975000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe6fdc0f0 a2=0 a3=0 items=0 ppid=2935 
pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:26:54.977290 kubelet[2935]: E1212 17:26:54.976806 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:26:54.982000 audit[2968]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:54.982000 audit[2968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7111540 a2=0 a3=0 items=0 ppid=2935 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:26:54.987000 audit[2969]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:54.987000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffffd68b40 a2=0 a3=0 items=0 ppid=2935 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:26:54.990912 kubelet[2935]: I1212 17:26:54.990749 2935 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:26:54.990912 kubelet[2935]: I1212 17:26:54.990780 2935 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:26:54.990912 kubelet[2935]: I1212 17:26:54.990813 2935 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:54.994822 kubelet[2935]: I1212 17:26:54.994779 2935 policy_none.go:49] "None policy: Start" Dec 12 17:26:54.994822 kubelet[2935]: I1212 17:26:54.994822 2935 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:26:54.995039 kubelet[2935]: I1212 17:26:54.994887 2935 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:26:54.998075 kubelet[2935]: I1212 17:26:54.998044 2935 policy_none.go:47] "Start" Dec 12 17:26:55.005647 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:26:55.024035 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:26:55.037287 kubelet[2935]: E1212 17:26:55.037232 2935 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-55\" not found" Dec 12 17:26:55.045646 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 12 17:26:55.049552 kubelet[2935]: E1212 17:26:55.049498 2935 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:26:55.051542 kubelet[2935]: I1212 17:26:55.051480 2935 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:26:55.051922 kubelet[2935]: I1212 17:26:55.051780 2935 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:26:55.053102 kubelet[2935]: I1212 17:26:55.053067 2935 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:26:55.055254 kubelet[2935]: E1212 17:26:55.055207 2935 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:26:55.055418 kubelet[2935]: E1212 17:26:55.055323 2935 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-55\" not found" Dec 12 17:26:55.092727 systemd[1]: Created slice kubepods-burstable-pod1457dd06cb0a124e23e6b6d835332137.slice - libcontainer container kubepods-burstable-pod1457dd06cb0a124e23e6b6d835332137.slice. Dec 12 17:26:55.110514 kubelet[2935]: E1212 17:26:55.108665 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-55\" not found" node="ip-172-31-16-55" Dec 12 17:26:55.115463 systemd[1]: Created slice kubepods-burstable-pod15cca05e190f95ce81facfc51214d42a.slice - libcontainer container kubepods-burstable-pod15cca05e190f95ce81facfc51214d42a.slice. Dec 12 17:26:55.130489 kubelet[2935]: E1212 17:26:55.130446 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-55\" not found" node="ip-172-31-16-55" Dec 12 17:26:55.139463 systemd[1]: Created slice kubepods-burstable-pod99caf63a4208ffbcc39b03501211a343.slice - libcontainer container kubepods-burstable-pod99caf63a4208ffbcc39b03501211a343.slice. 
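The recurring "dial tcp 172.31.16.55:6443: connect: connection refused" errors from the reflectors and the lease controller are expected at this point in a kubeadm-style bootstrap: the kubelet is only now creating the cgroup slices and sandboxes for the static control-plane pods, so nothing is listening on port 6443 yet. A small sketch, under the same illustrative journal.txt assumption, that tallies those errors by the client-go resource type that reported them:

```python
# Count "connection refused" errors against the API server, grouped by the
# client-go reflector type. "journal.txt" is an assumed, illustrative file name.
import re
from collections import Counter

refused = Counter()
with open("journal.txt", encoding="utf-8") as f:
    for line in f:
        if "connect: connection refused" not in line:
            continue
        m = re.search(r'type="(\*v1\.\w+)"', line)
        refused[m.group(1) if m else "other"] += 1

for kind, count in refused.most_common():
    print(f"{kind}: {count}")
```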
Dec 12 17:26:55.142557 kubelet[2935]: E1212 17:26:55.142128 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-55?timeout=10s\": dial tcp 172.31.16.55:6443: connect: connection refused" interval="400ms" Dec 12 17:26:55.145670 kubelet[2935]: E1212 17:26:55.145573 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-55\" not found" node="ip-172-31-16-55" Dec 12 17:26:55.153677 kubelet[2935]: I1212 17:26:55.153635 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:26:55.153924 kubelet[2935]: I1212 17:26:55.153896 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:26:55.155029 kubelet[2935]: I1212 17:26:55.154967 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:26:55.155217 kubelet[2935]: I1212 17:26:55.155041 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1457dd06cb0a124e23e6b6d835332137-ca-certs\") pod \"kube-apiserver-ip-172-31-16-55\" (UID: \"1457dd06cb0a124e23e6b6d835332137\") " pod="kube-system/kube-apiserver-ip-172-31-16-55" Dec 12 17:26:55.155217 kubelet[2935]: I1212 17:26:55.155084 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1457dd06cb0a124e23e6b6d835332137-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-55\" (UID: \"1457dd06cb0a124e23e6b6d835332137\") " pod="kube-system/kube-apiserver-ip-172-31-16-55" Dec 12 17:26:55.155217 kubelet[2935]: I1212 17:26:55.155143 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:26:55.155217 kubelet[2935]: I1212 17:26:55.155185 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99caf63a4208ffbcc39b03501211a343-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-55\" (UID: \"99caf63a4208ffbcc39b03501211a343\") " pod="kube-system/kube-scheduler-ip-172-31-16-55" Dec 12 17:26:55.155421 kubelet[2935]: I1212 17:26:55.155223 2935 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1457dd06cb0a124e23e6b6d835332137-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-55\" (UID: \"1457dd06cb0a124e23e6b6d835332137\") " pod="kube-system/kube-apiserver-ip-172-31-16-55" Dec 12 17:26:55.155421 kubelet[2935]: I1212 17:26:55.155257 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:26:55.156917 kubelet[2935]: I1212 17:26:55.156523 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-55" Dec 12 17:26:55.157575 kubelet[2935]: E1212 17:26:55.157485 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.55:6443/api/v1/nodes\": dial tcp 172.31.16.55:6443: connect: connection refused" node="ip-172-31-16-55" Dec 12 17:26:55.361001 kubelet[2935]: I1212 17:26:55.360748 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-55" Dec 12 17:26:55.361886 kubelet[2935]: E1212 17:26:55.361801 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.55:6443/api/v1/nodes\": dial tcp 172.31.16.55:6443: connect: connection refused" node="ip-172-31-16-55" Dec 12 17:26:55.417678 containerd[2012]: time="2025-12-12T17:26:55.417529398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-55,Uid:1457dd06cb0a124e23e6b6d835332137,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:55.436063 containerd[2012]: time="2025-12-12T17:26:55.435985470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-55,Uid:15cca05e190f95ce81facfc51214d42a,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:55.451402 containerd[2012]: time="2025-12-12T17:26:55.451256454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-55,Uid:99caf63a4208ffbcc39b03501211a343,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:55.543719 kubelet[2935]: E1212 17:26:55.543317 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-55?timeout=10s\": dial tcp 172.31.16.55:6443: connect: connection refused" interval="800ms" Dec 12 17:26:55.762056 kubelet[2935]: E1212 17:26:55.761919 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-55&limit=500&resourceVersion=0\": dial tcp 172.31.16.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:26:55.764583 kubelet[2935]: I1212 17:26:55.764545 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-55" Dec 12 17:26:55.765418 kubelet[2935]: E1212 17:26:55.765367 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.55:6443/api/v1/nodes\": dial tcp 172.31.16.55:6443: connect: connection refused" node="ip-172-31-16-55" Dec 12 17:26:55.832044 kubelet[2935]: E1212 17:26:55.831974 2935 reflector.go:205] "Failed to watch" 
err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:26:55.925639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537018244.mount: Deactivated successfully. Dec 12 17:26:55.942313 containerd[2012]: time="2025-12-12T17:26:55.941058393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:55.948568 containerd[2012]: time="2025-12-12T17:26:55.948498357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:26:55.957864 containerd[2012]: time="2025-12-12T17:26:55.957784905Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:55.959545 containerd[2012]: time="2025-12-12T17:26:55.959471109Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:55.961542 containerd[2012]: time="2025-12-12T17:26:55.961454901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:26:55.965459 containerd[2012]: time="2025-12-12T17:26:55.965382345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:55.967687 containerd[2012]: time="2025-12-12T17:26:55.967611249Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 543.356487ms" Dec 12 17:26:55.970576 containerd[2012]: time="2025-12-12T17:26:55.970208337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:26:55.970576 containerd[2012]: time="2025-12-12T17:26:55.970364961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:55.974195 containerd[2012]: time="2025-12-12T17:26:55.974117217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 533.386887ms" Dec 12 17:26:55.985015 containerd[2012]: time="2025-12-12T17:26:55.984793797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 528.482643ms" Dec 12 17:26:56.010076 containerd[2012]: time="2025-12-12T17:26:56.010018433Z" level=info msg="connecting to shim e45731169668a0a75a1b9f8d95824aadc4be105a605d6d8ce57dbaedcf009f5b" address="unix:///run/containerd/s/67b126ce8b7138546ad516f12309de4aa67e73768df89715d1f3de4b94bbe450" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:56.063709 kubelet[2935]: E1212 17:26:56.063638 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:26:56.075166 containerd[2012]: time="2025-12-12T17:26:56.075069569Z" level=info msg="connecting to shim ac2cab7dcd35a71f435e177dcd2b02d1edb8bd28619c0243bf09bd5844b25d2d" address="unix:///run/containerd/s/828992ed9b0d0293b59169a9faf9232ce4f1db369ee4f99cba40b38d58ade74d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:56.076592 containerd[2012]: time="2025-12-12T17:26:56.076513709Z" level=info msg="connecting to shim dbe49d6d26c660c019e017d67daf62c8bd662a9e01223bd97e5c5523cb364286" address="unix:///run/containerd/s/6dbf6cfa097d94462b59d547d4a17d4b9eb8ccfb5243212a732bc242ba9c542e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:56.106531 systemd[1]: Started cri-containerd-e45731169668a0a75a1b9f8d95824aadc4be105a605d6d8ce57dbaedcf009f5b.scope - libcontainer container e45731169668a0a75a1b9f8d95824aadc4be105a605d6d8ce57dbaedcf009f5b. Dec 12 17:26:56.109031 kubelet[2935]: E1212 17:26:56.107982 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:26:56.160595 systemd[1]: Started cri-containerd-ac2cab7dcd35a71f435e177dcd2b02d1edb8bd28619c0243bf09bd5844b25d2d.scope - libcontainer container ac2cab7dcd35a71f435e177dcd2b02d1edb8bd28619c0243bf09bd5844b25d2d. 
Dec 12 17:26:56.167000 audit: BPF prog-id=90 op=LOAD Dec 12 17:26:56.173000 audit: BPF prog-id=91 op=LOAD Dec 12 17:26:56.173000 audit[2993]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000170180 a2=98 a3=0 items=0 ppid=2982 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353733313136393636386130613735613162396638643935383234 Dec 12 17:26:56.174000 audit: BPF prog-id=91 op=UNLOAD Dec 12 17:26:56.174000 audit[2993]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2982 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353733313136393636386130613735613162396638643935383234 Dec 12 17:26:56.176000 audit: BPF prog-id=92 op=LOAD Dec 12 17:26:56.176000 audit[2993]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001703e8 a2=98 a3=0 items=0 ppid=2982 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353733313136393636386130613735613162396638643935383234 Dec 12 17:26:56.176000 audit: BPF prog-id=93 op=LOAD Dec 12 17:26:56.176000 audit[2993]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000170168 a2=98 a3=0 items=0 ppid=2982 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353733313136393636386130613735613162396638643935383234 Dec 12 17:26:56.180000 audit: BPF prog-id=93 op=UNLOAD Dec 12 17:26:56.180000 audit[2993]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2982 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353733313136393636386130613735613162396638643935383234 Dec 12 17:26:56.180000 audit: BPF prog-id=92 op=UNLOAD Dec 12 17:26:56.180000 audit[2993]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2982 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353733313136393636386130613735613162396638643935383234 Dec 12 17:26:56.180000 audit: BPF prog-id=94 op=LOAD Dec 12 17:26:56.180000 audit[2993]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000170648 a2=98 a3=0 items=0 ppid=2982 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534353733313136393636386130613735613162396638643935383234 Dec 12 17:26:56.188507 systemd[1]: Started cri-containerd-dbe49d6d26c660c019e017d67daf62c8bd662a9e01223bd97e5c5523cb364286.scope - libcontainer container dbe49d6d26c660c019e017d67daf62c8bd662a9e01223bd97e5c5523cb364286. Dec 12 17:26:56.201000 audit: BPF prog-id=95 op=LOAD Dec 12 17:26:56.203000 audit: BPF prog-id=96 op=LOAD Dec 12 17:26:56.203000 audit[3038]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174180 a2=98 a3=0 items=0 ppid=3017 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163326361623764636433356137316634333565313737646364326230 Dec 12 17:26:56.203000 audit: BPF prog-id=96 op=UNLOAD Dec 12 17:26:56.203000 audit[3038]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3017 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163326361623764636433356137316634333565313737646364326230 Dec 12 17:26:56.205000 audit: BPF prog-id=97 op=LOAD Dec 12 17:26:56.205000 audit[3038]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001743e8 a2=98 a3=0 items=0 ppid=3017 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.205000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163326361623764636433356137316634333565313737646364326230 Dec 12 17:26:56.205000 audit: BPF prog-id=98 op=LOAD Dec 12 17:26:56.205000 audit[3038]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000174168 a2=98 a3=0 items=0 ppid=3017 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163326361623764636433356137316634333565313737646364326230 Dec 12 17:26:56.206000 audit: BPF prog-id=98 op=UNLOAD Dec 12 17:26:56.206000 audit[3038]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3017 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163326361623764636433356137316634333565313737646364326230 Dec 12 17:26:56.206000 audit: BPF prog-id=97 op=UNLOAD Dec 12 17:26:56.206000 audit[3038]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3017 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163326361623764636433356137316634333565313737646364326230 Dec 12 17:26:56.206000 audit: BPF prog-id=99 op=LOAD Dec 12 17:26:56.206000 audit[3038]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174648 a2=98 a3=0 items=0 ppid=3017 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163326361623764636433356137316634333565313737646364326230 Dec 12 17:26:56.280000 audit: BPF prog-id=100 op=LOAD Dec 12 17:26:56.282038 containerd[2012]: time="2025-12-12T17:26:56.281415690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-55,Uid:1457dd06cb0a124e23e6b6d835332137,Namespace:kube-system,Attempt:0,} returns sandbox id \"e45731169668a0a75a1b9f8d95824aadc4be105a605d6d8ce57dbaedcf009f5b\"" Dec 12 17:26:56.282000 audit: BPF prog-id=101 op=LOAD Dec 12 17:26:56.282000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 
ppid=3019 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462653439643664323663363630633031396530313764363764616636 Dec 12 17:26:56.282000 audit: BPF prog-id=101 op=UNLOAD Dec 12 17:26:56.282000 audit[3051]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462653439643664323663363630633031396530313764363764616636 Dec 12 17:26:56.283000 audit: BPF prog-id=102 op=LOAD Dec 12 17:26:56.283000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3019 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462653439643664323663363630633031396530313764363764616636 Dec 12 17:26:56.283000 audit: BPF prog-id=103 op=LOAD Dec 12 17:26:56.283000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3019 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462653439643664323663363630633031396530313764363764616636 Dec 12 17:26:56.283000 audit: BPF prog-id=103 op=UNLOAD Dec 12 17:26:56.283000 audit[3051]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462653439643664323663363630633031396530313764363764616636 Dec 12 17:26:56.283000 audit: BPF prog-id=102 op=UNLOAD Dec 12 17:26:56.283000 audit[3051]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462653439643664323663363630633031396530313764363764616636 Dec 12 17:26:56.283000 audit: BPF prog-id=104 op=LOAD Dec 12 17:26:56.283000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3019 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462653439643664323663363630633031396530313764363764616636 Dec 12 17:26:56.308812 containerd[2012]: time="2025-12-12T17:26:56.308755998Z" level=info msg="CreateContainer within sandbox \"e45731169668a0a75a1b9f8d95824aadc4be105a605d6d8ce57dbaedcf009f5b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:26:56.313636 containerd[2012]: time="2025-12-12T17:26:56.313556082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-55,Uid:15cca05e190f95ce81facfc51214d42a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac2cab7dcd35a71f435e177dcd2b02d1edb8bd28619c0243bf09bd5844b25d2d\"" Dec 12 17:26:56.324038 containerd[2012]: time="2025-12-12T17:26:56.323668566Z" level=info msg="CreateContainer within sandbox \"ac2cab7dcd35a71f435e177dcd2b02d1edb8bd28619c0243bf09bd5844b25d2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:26:56.334553 containerd[2012]: time="2025-12-12T17:26:56.334378051Z" level=info msg="Container 7a55d8877f76d9d4d52850e47b93f7fe6070402b6756d02e6e67d6f6b5b1fa0d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:56.344660 kubelet[2935]: E1212 17:26:56.344191 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-55?timeout=10s\": dial tcp 172.31.16.55:6443: connect: connection refused" interval="1.6s" Dec 12 17:26:56.361971 containerd[2012]: time="2025-12-12T17:26:56.361901083Z" level=info msg="CreateContainer within sandbox \"e45731169668a0a75a1b9f8d95824aadc4be105a605d6d8ce57dbaedcf009f5b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a55d8877f76d9d4d52850e47b93f7fe6070402b6756d02e6e67d6f6b5b1fa0d\"" Dec 12 17:26:56.364396 containerd[2012]: time="2025-12-12T17:26:56.364310791Z" level=info msg="StartContainer for \"7a55d8877f76d9d4d52850e47b93f7fe6070402b6756d02e6e67d6f6b5b1fa0d\"" Dec 12 17:26:56.366269 containerd[2012]: time="2025-12-12T17:26:56.366201235Z" level=info msg="Container adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:56.368527 containerd[2012]: time="2025-12-12T17:26:56.368372911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-55,Uid:99caf63a4208ffbcc39b03501211a343,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbe49d6d26c660c019e017d67daf62c8bd662a9e01223bd97e5c5523cb364286\"" Dec 12 17:26:56.371622 
containerd[2012]: time="2025-12-12T17:26:56.370595863Z" level=info msg="connecting to shim 7a55d8877f76d9d4d52850e47b93f7fe6070402b6756d02e6e67d6f6b5b1fa0d" address="unix:///run/containerd/s/67b126ce8b7138546ad516f12309de4aa67e73768df89715d1f3de4b94bbe450" protocol=ttrpc version=3 Dec 12 17:26:56.381907 containerd[2012]: time="2025-12-12T17:26:56.381835687Z" level=info msg="CreateContainer within sandbox \"dbe49d6d26c660c019e017d67daf62c8bd662a9e01223bd97e5c5523cb364286\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:26:56.389510 containerd[2012]: time="2025-12-12T17:26:56.389387071Z" level=info msg="CreateContainer within sandbox \"ac2cab7dcd35a71f435e177dcd2b02d1edb8bd28619c0243bf09bd5844b25d2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391\"" Dec 12 17:26:56.393040 containerd[2012]: time="2025-12-12T17:26:56.392981383Z" level=info msg="StartContainer for \"adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391\"" Dec 12 17:26:56.397370 containerd[2012]: time="2025-12-12T17:26:56.397291531Z" level=info msg="connecting to shim adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391" address="unix:///run/containerd/s/828992ed9b0d0293b59169a9faf9232ce4f1db369ee4f99cba40b38d58ade74d" protocol=ttrpc version=3 Dec 12 17:26:56.414742 containerd[2012]: time="2025-12-12T17:26:56.413638555Z" level=info msg="Container ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:56.415570 systemd[1]: Started cri-containerd-7a55d8877f76d9d4d52850e47b93f7fe6070402b6756d02e6e67d6f6b5b1fa0d.scope - libcontainer container 7a55d8877f76d9d4d52850e47b93f7fe6070402b6756d02e6e67d6f6b5b1fa0d. Dec 12 17:26:56.442405 containerd[2012]: time="2025-12-12T17:26:56.442258279Z" level=info msg="CreateContainer within sandbox \"dbe49d6d26c660c019e017d67daf62c8bd662a9e01223bd97e5c5523cb364286\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323\"" Dec 12 17:26:56.446285 containerd[2012]: time="2025-12-12T17:26:56.446066179Z" level=info msg="StartContainer for \"ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323\"" Dec 12 17:26:56.450006 containerd[2012]: time="2025-12-12T17:26:56.449749111Z" level=info msg="connecting to shim ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323" address="unix:///run/containerd/s/6dbf6cfa097d94462b59d547d4a17d4b9eb8ccfb5243212a732bc242ba9c542e" protocol=ttrpc version=3 Dec 12 17:26:56.461237 systemd[1]: Started cri-containerd-adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391.scope - libcontainer container adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391. 
Dec 12 17:26:56.467000 audit: BPF prog-id=105 op=LOAD Dec 12 17:26:56.468000 audit: BPF prog-id=106 op=LOAD Dec 12 17:26:56.468000 audit[3113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2982 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353564383837376637366439643464353238353065343762393366 Dec 12 17:26:56.469000 audit: BPF prog-id=106 op=UNLOAD Dec 12 17:26:56.469000 audit[3113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2982 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353564383837376637366439643464353238353065343762393366 Dec 12 17:26:56.469000 audit: BPF prog-id=107 op=LOAD Dec 12 17:26:56.469000 audit[3113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2982 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353564383837376637366439643464353238353065343762393366 Dec 12 17:26:56.470000 audit: BPF prog-id=108 op=LOAD Dec 12 17:26:56.470000 audit[3113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2982 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353564383837376637366439643464353238353065343762393366 Dec 12 17:26:56.470000 audit: BPF prog-id=108 op=UNLOAD Dec 12 17:26:56.470000 audit[3113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2982 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353564383837376637366439643464353238353065343762393366 Dec 12 17:26:56.470000 audit: BPF prog-id=107 op=UNLOAD Dec 12 17:26:56.470000 audit[3113]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2982 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353564383837376637366439643464353238353065343762393366 Dec 12 17:26:56.471000 audit: BPF prog-id=109 op=LOAD Dec 12 17:26:56.471000 audit[3113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2982 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353564383837376637366439643464353238353065343762393366 Dec 12 17:26:56.511197 systemd[1]: Started cri-containerd-ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323.scope - libcontainer container ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323. Dec 12 17:26:56.516000 audit: BPF prog-id=110 op=LOAD Dec 12 17:26:56.519000 audit: BPF prog-id=111 op=LOAD Dec 12 17:26:56.519000 audit[3126]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3017 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164646135336139623366386634313961343131626539656232323139 Dec 12 17:26:56.519000 audit: BPF prog-id=111 op=UNLOAD Dec 12 17:26:56.519000 audit[3126]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3017 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164646135336139623366386634313961343131626539656232323139 Dec 12 17:26:56.520000 audit: BPF prog-id=112 op=LOAD Dec 12 17:26:56.520000 audit[3126]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3017 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.520000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164646135336139623366386634313961343131626539656232323139 Dec 12 17:26:56.521000 audit: BPF prog-id=113 op=LOAD Dec 12 17:26:56.521000 audit[3126]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3017 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164646135336139623366386634313961343131626539656232323139 Dec 12 17:26:56.522000 audit: BPF prog-id=113 op=UNLOAD Dec 12 17:26:56.522000 audit[3126]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3017 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164646135336139623366386634313961343131626539656232323139 Dec 12 17:26:56.522000 audit: BPF prog-id=112 op=UNLOAD Dec 12 17:26:56.522000 audit[3126]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3017 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164646135336139623366386634313961343131626539656232323139 Dec 12 17:26:56.524000 audit: BPF prog-id=114 op=LOAD Dec 12 17:26:56.524000 audit[3126]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3017 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164646135336139623366386634313961343131626539656232323139 Dec 12 17:26:56.561000 audit: BPF prog-id=115 op=LOAD Dec 12 17:26:56.562000 audit: BPF prog-id=116 op=LOAD Dec 12 17:26:56.562000 audit[3145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3019 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.562000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383035346337353430386531323564633162646133353461316666 Dec 12 17:26:56.562000 audit: BPF prog-id=116 op=UNLOAD Dec 12 17:26:56.562000 audit[3145]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383035346337353430386531323564633162646133353461316666 Dec 12 17:26:56.564000 audit: BPF prog-id=117 op=LOAD Dec 12 17:26:56.564000 audit[3145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3019 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383035346337353430386531323564633162646133353461316666 Dec 12 17:26:56.564000 audit: BPF prog-id=118 op=LOAD Dec 12 17:26:56.564000 audit[3145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3019 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383035346337353430386531323564633162646133353461316666 Dec 12 17:26:56.564000 audit: BPF prog-id=118 op=UNLOAD Dec 12 17:26:56.564000 audit[3145]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383035346337353430386531323564633162646133353461316666 Dec 12 17:26:56.564000 audit: BPF prog-id=117 op=UNLOAD Dec 12 17:26:56.564000 audit[3145]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.564000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383035346337353430386531323564633162646133353461316666 Dec 12 17:26:56.565000 audit: BPF prog-id=119 op=LOAD Dec 12 17:26:56.565000 audit[3145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3019 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383035346337353430386531323564633162646133353461316666 Dec 12 17:26:56.572562 kubelet[2935]: I1212 17:26:56.572435 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-55" Dec 12 17:26:56.573705 kubelet[2935]: E1212 17:26:56.573632 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.55:6443/api/v1/nodes\": dial tcp 172.31.16.55:6443: connect: connection refused" node="ip-172-31-16-55" Dec 12 17:26:56.602259 containerd[2012]: time="2025-12-12T17:26:56.602066156Z" level=info msg="StartContainer for \"7a55d8877f76d9d4d52850e47b93f7fe6070402b6756d02e6e67d6f6b5b1fa0d\" returns successfully" Dec 12 17:26:56.638963 containerd[2012]: time="2025-12-12T17:26:56.638900948Z" level=info msg="StartContainer for \"adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391\" returns successfully" Dec 12 17:26:56.739007 containerd[2012]: time="2025-12-12T17:26:56.738326997Z" level=info msg="StartContainer for \"ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323\" returns successfully" Dec 12 17:26:57.019443 kubelet[2935]: E1212 17:26:57.019056 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-55\" not found" node="ip-172-31-16-55" Dec 12 17:26:57.025941 kubelet[2935]: E1212 17:26:57.025907 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-55\" not found" node="ip-172-31-16-55" Dec 12 17:26:57.037876 kubelet[2935]: E1212 17:26:57.037816 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-55\" not found" node="ip-172-31-16-55" Dec 12 17:26:58.043514 kubelet[2935]: E1212 17:26:58.043466 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-55\" not found" node="ip-172-31-16-55" Dec 12 17:26:58.044214 kubelet[2935]: E1212 17:26:58.043788 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-55\" not found" node="ip-172-31-16-55" Dec 12 17:26:58.176467 kubelet[2935]: I1212 17:26:58.176397 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-55" Dec 12 17:27:00.905287 kubelet[2935]: I1212 17:27:00.904955 2935 apiserver.go:52] "Watching apiserver" Dec 12 17:27:01.003502 kubelet[2935]: E1212 17:27:01.003427 2935 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-55\" not found" 
node="ip-172-31-16-55" Dec 12 17:27:01.027709 kubelet[2935]: I1212 17:27:01.027631 2935 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-55" Dec 12 17:27:01.033900 kubelet[2935]: I1212 17:27:01.033813 2935 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-55" Dec 12 17:27:01.046577 kubelet[2935]: I1212 17:27:01.046546 2935 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 17:27:01.136378 kubelet[2935]: E1212 17:27:01.135881 2935 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-55" Dec 12 17:27:01.136378 kubelet[2935]: I1212 17:27:01.135931 2935 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:27:01.160383 kubelet[2935]: E1212 17:27:01.159759 2935 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:27:01.160383 kubelet[2935]: I1212 17:27:01.159832 2935 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-55" Dec 12 17:27:01.181410 kubelet[2935]: E1212 17:27:01.181327 2935 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-55" Dec 12 17:27:02.594052 kubelet[2935]: I1212 17:27:02.593658 2935 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-55" Dec 12 17:27:03.700570 systemd[1]: Reload requested from client PID 3218 ('systemctl') (unit session-7.scope)... Dec 12 17:27:03.700595 systemd[1]: Reloading... Dec 12 17:27:03.886891 zram_generator::config[3268]: No configuration found. Dec 12 17:27:04.408459 systemd[1]: Reloading finished in 707 ms. Dec 12 17:27:04.477469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:27:04.493710 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:27:04.494988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:27:04.495107 systemd[1]: kubelet.service: Consumed 2.493s CPU time, 121.4M memory peak. Dec 12 17:27:04.503322 kernel: kauditd_printk_skb: 200 callbacks suppressed Dec 12 17:27:04.503447 kernel: audit: type=1131 audit(1765560424.494:411): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:04.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:04.500317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 12 17:27:04.503000 audit: BPF prog-id=120 op=LOAD Dec 12 17:27:04.506321 kernel: audit: type=1334 audit(1765560424.503:412): prog-id=120 op=LOAD Dec 12 17:27:04.503000 audit: BPF prog-id=80 op=UNLOAD Dec 12 17:27:04.508179 kernel: audit: type=1334 audit(1765560424.503:413): prog-id=80 op=UNLOAD Dec 12 17:27:04.513000 audit: BPF prog-id=121 op=LOAD Dec 12 17:27:04.513000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:27:04.517587 kernel: audit: type=1334 audit(1765560424.513:414): prog-id=121 op=LOAD Dec 12 17:27:04.517683 kernel: audit: type=1334 audit(1765560424.513:415): prog-id=84 op=UNLOAD Dec 12 17:27:04.514000 audit: BPF prog-id=122 op=LOAD Dec 12 17:27:04.519885 kernel: audit: type=1334 audit(1765560424.514:416): prog-id=122 op=LOAD Dec 12 17:27:04.518000 audit: BPF prog-id=123 op=LOAD Dec 12 17:27:04.519000 audit: BPF prog-id=85 op=UNLOAD Dec 12 17:27:04.525472 kernel: audit: type=1334 audit(1765560424.518:417): prog-id=123 op=LOAD Dec 12 17:27:04.525586 kernel: audit: type=1334 audit(1765560424.519:418): prog-id=85 op=UNLOAD Dec 12 17:27:04.519000 audit: BPF prog-id=86 op=UNLOAD Dec 12 17:27:04.527544 kernel: audit: type=1334 audit(1765560424.519:419): prog-id=86 op=UNLOAD Dec 12 17:27:04.521000 audit: BPF prog-id=124 op=LOAD Dec 12 17:27:04.529262 kernel: audit: type=1334 audit(1765560424.521:420): prog-id=124 op=LOAD Dec 12 17:27:04.521000 audit: BPF prog-id=77 op=UNLOAD Dec 12 17:27:04.523000 audit: BPF prog-id=125 op=LOAD Dec 12 17:27:04.524000 audit: BPF prog-id=126 op=LOAD Dec 12 17:27:04.524000 audit: BPF prog-id=78 op=UNLOAD Dec 12 17:27:04.524000 audit: BPF prog-id=79 op=UNLOAD Dec 12 17:27:04.529000 audit: BPF prog-id=127 op=LOAD Dec 12 17:27:04.535000 audit: BPF prog-id=81 op=UNLOAD Dec 12 17:27:04.535000 audit: BPF prog-id=128 op=LOAD Dec 12 17:27:04.535000 audit: BPF prog-id=129 op=LOAD Dec 12 17:27:04.535000 audit: BPF prog-id=82 op=UNLOAD Dec 12 17:27:04.535000 audit: BPF prog-id=83 op=UNLOAD Dec 12 17:27:04.537000 audit: BPF prog-id=130 op=LOAD Dec 12 17:27:04.537000 audit: BPF prog-id=131 op=LOAD Dec 12 17:27:04.537000 audit: BPF prog-id=70 op=UNLOAD Dec 12 17:27:04.537000 audit: BPF prog-id=71 op=UNLOAD Dec 12 17:27:04.542000 audit: BPF prog-id=132 op=LOAD Dec 12 17:27:04.542000 audit: BPF prog-id=87 op=UNLOAD Dec 12 17:27:04.542000 audit: BPF prog-id=133 op=LOAD Dec 12 17:27:04.542000 audit: BPF prog-id=134 op=LOAD Dec 12 17:27:04.542000 audit: BPF prog-id=88 op=UNLOAD Dec 12 17:27:04.542000 audit: BPF prog-id=89 op=UNLOAD Dec 12 17:27:04.552000 audit: BPF prog-id=135 op=LOAD Dec 12 17:27:04.552000 audit: BPF prog-id=73 op=UNLOAD Dec 12 17:27:04.556000 audit: BPF prog-id=136 op=LOAD Dec 12 17:27:04.556000 audit: BPF prog-id=74 op=UNLOAD Dec 12 17:27:04.556000 audit: BPF prog-id=137 op=LOAD Dec 12 17:27:04.556000 audit: BPF prog-id=138 op=LOAD Dec 12 17:27:04.556000 audit: BPF prog-id=75 op=UNLOAD Dec 12 17:27:04.556000 audit: BPF prog-id=76 op=UNLOAD Dec 12 17:27:04.560000 audit: BPF prog-id=139 op=LOAD Dec 12 17:27:04.560000 audit: BPF prog-id=72 op=UNLOAD Dec 12 17:27:04.948840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:27:04.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:04.969108 (kubelet)[3325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:27:05.091906 kubelet[3325]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:27:05.091906 kubelet[3325]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:27:05.091906 kubelet[3325]: I1212 17:27:05.090093 3325 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:27:05.108493 kubelet[3325]: I1212 17:27:05.108294 3325 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:27:05.108493 kubelet[3325]: I1212 17:27:05.108361 3325 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:27:05.108887 kubelet[3325]: I1212 17:27:05.108775 3325 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:27:05.108887 kubelet[3325]: I1212 17:27:05.108805 3325 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:27:05.109503 kubelet[3325]: I1212 17:27:05.109470 3325 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:27:05.112634 kubelet[3325]: I1212 17:27:05.112597 3325 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:27:05.117882 kubelet[3325]: I1212 17:27:05.117812 3325 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:27:05.127338 kubelet[3325]: I1212 17:27:05.127307 3325 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:27:05.135011 kubelet[3325]: I1212 17:27:05.134969 3325 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 17:27:05.135618 kubelet[3325]: I1212 17:27:05.135557 3325 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:27:05.136250 kubelet[3325]: I1212 17:27:05.135758 3325 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:27:05.136506 kubelet[3325]: I1212 17:27:05.136480 3325 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:27:05.136615 kubelet[3325]: I1212 17:27:05.136597 3325 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:27:05.136774 kubelet[3325]: I1212 17:27:05.136753 3325 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:27:05.140421 kubelet[3325]: I1212 17:27:05.140355 3325 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:27:05.141016 kubelet[3325]: I1212 17:27:05.140983 3325 kubelet.go:475] "Attempting to sync node with API server" Dec 12 17:27:05.141130 kubelet[3325]: I1212 17:27:05.141022 3325 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:27:05.141130 kubelet[3325]: I1212 17:27:05.141071 3325 kubelet.go:387] "Adding apiserver pod source" Dec 12 17:27:05.141130 kubelet[3325]: I1212 17:27:05.141099 3325 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:27:05.143815 kubelet[3325]: I1212 17:27:05.143767 3325 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:27:05.144753 kubelet[3325]: I1212 17:27:05.144706 3325 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:27:05.144831 kubelet[3325]: I1212 17:27:05.144770 3325 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 17:27:05.151150 
kubelet[3325]: I1212 17:27:05.150957 3325 server.go:1262] "Started kubelet" Dec 12 17:27:05.154658 kubelet[3325]: I1212 17:27:05.154406 3325 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:27:05.160493 kubelet[3325]: I1212 17:27:05.160436 3325 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:27:05.163970 update_engine[1963]: I20251212 17:27:05.163906 1963 update_attempter.cc:509] Updating boot flags... Dec 12 17:27:05.166876 kubelet[3325]: I1212 17:27:05.164792 3325 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:27:05.171959 kubelet[3325]: I1212 17:27:05.171012 3325 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:27:05.171959 kubelet[3325]: E1212 17:27:05.171375 3325 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-55\" not found" Dec 12 17:27:05.171959 kubelet[3325]: I1212 17:27:05.171814 3325 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:27:05.176474 kubelet[3325]: I1212 17:27:05.175665 3325 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:27:05.190887 kubelet[3325]: I1212 17:27:05.180337 3325 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:27:05.190887 kubelet[3325]: I1212 17:27:05.188651 3325 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:27:05.194942 kubelet[3325]: I1212 17:27:05.194062 3325 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:27:05.197485 kubelet[3325]: I1212 17:27:05.184894 3325 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:27:05.250662 kubelet[3325]: I1212 17:27:05.248744 3325 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:27:05.251275 kubelet[3325]: I1212 17:27:05.250996 3325 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:27:05.274503 kubelet[3325]: E1212 17:27:05.274328 3325 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-55\" not found" Dec 12 17:27:05.322763 kubelet[3325]: I1212 17:27:05.322278 3325 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:27:05.327252 kubelet[3325]: E1212 17:27:05.324611 3325 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:27:05.383983 kubelet[3325]: I1212 17:27:05.383914 3325 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 17:27:05.422374 kubelet[3325]: I1212 17:27:05.422208 3325 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:27:05.422374 kubelet[3325]: I1212 17:27:05.422253 3325 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:27:05.422374 kubelet[3325]: I1212 17:27:05.422318 3325 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:27:05.422750 kubelet[3325]: E1212 17:27:05.422698 3325 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:27:05.524083 kubelet[3325]: E1212 17:27:05.523715 3325 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:27:05.669616 kubelet[3325]: I1212 17:27:05.668902 3325 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:27:05.669811 kubelet[3325]: I1212 17:27:05.669778 3325 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:27:05.670012 kubelet[3325]: I1212 17:27:05.669922 3325 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:27:05.671088 kubelet[3325]: I1212 17:27:05.671052 3325 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:27:05.672586 kubelet[3325]: I1212 17:27:05.672278 3325 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:27:05.674768 kubelet[3325]: I1212 17:27:05.672912 3325 policy_none.go:49] "None policy: Start" Dec 12 17:27:05.674768 kubelet[3325]: I1212 17:27:05.672946 3325 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:27:05.674768 kubelet[3325]: I1212 17:27:05.673094 3325 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:27:05.674768 kubelet[3325]: I1212 17:27:05.673401 3325 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 12 17:27:05.674768 kubelet[3325]: I1212 17:27:05.673420 3325 policy_none.go:47] "Start" Dec 12 17:27:05.705883 kubelet[3325]: E1212 17:27:05.704775 3325 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:27:05.717689 kubelet[3325]: I1212 17:27:05.716385 3325 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:27:05.717689 kubelet[3325]: I1212 17:27:05.716588 3325 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:27:05.722812 kubelet[3325]: I1212 17:27:05.721144 3325 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:27:05.728512 kubelet[3325]: I1212 17:27:05.725675 3325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-55" Dec 12 17:27:05.744573 kubelet[3325]: I1212 17:27:05.734016 3325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:27:05.754731 kubelet[3325]: I1212 17:27:05.740338 3325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-55" Dec 12 17:27:05.771901 kubelet[3325]: E1212 17:27:05.770447 3325 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:27:05.804830 kubelet[3325]: I1212 17:27:05.803615 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:27:05.804830 kubelet[3325]: I1212 17:27:05.803819 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:27:05.805490 kubelet[3325]: I1212 17:27:05.805168 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:27:05.809575 kubelet[3325]: I1212 17:27:05.808873 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:27:05.809575 kubelet[3325]: I1212 17:27:05.808984 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99caf63a4208ffbcc39b03501211a343-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-55\" (UID: \"99caf63a4208ffbcc39b03501211a343\") " pod="kube-system/kube-scheduler-ip-172-31-16-55" Dec 12 17:27:05.809575 kubelet[3325]: I1212 17:27:05.809167 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1457dd06cb0a124e23e6b6d835332137-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-55\" (UID: \"1457dd06cb0a124e23e6b6d835332137\") " pod="kube-system/kube-apiserver-ip-172-31-16-55" Dec 12 17:27:05.809575 kubelet[3325]: I1212 17:27:05.809214 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15cca05e190f95ce81facfc51214d42a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-55\" (UID: \"15cca05e190f95ce81facfc51214d42a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-55" Dec 12 17:27:05.809575 kubelet[3325]: I1212 17:27:05.809361 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1457dd06cb0a124e23e6b6d835332137-ca-certs\") pod \"kube-apiserver-ip-172-31-16-55\" (UID: \"1457dd06cb0a124e23e6b6d835332137\") " pod="kube-system/kube-apiserver-ip-172-31-16-55" Dec 12 17:27:05.810426 kubelet[3325]: E1212 17:27:05.810354 3325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-55\" already exists" 
pod="kube-system/kube-scheduler-ip-172-31-16-55" Dec 12 17:27:05.814078 kubelet[3325]: I1212 17:27:05.810595 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1457dd06cb0a124e23e6b6d835332137-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-55\" (UID: \"1457dd06cb0a124e23e6b6d835332137\") " pod="kube-system/kube-apiserver-ip-172-31-16-55" Dec 12 17:27:05.910877 kubelet[3325]: I1212 17:27:05.910594 3325 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-55" Dec 12 17:27:05.981372 kubelet[3325]: I1212 17:27:05.979906 3325 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-55" Dec 12 17:27:05.981372 kubelet[3325]: I1212 17:27:05.980018 3325 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-55" Dec 12 17:27:06.143336 kubelet[3325]: I1212 17:27:06.143138 3325 apiserver.go:52] "Watching apiserver" Dec 12 17:27:06.176181 kubelet[3325]: I1212 17:27:06.175960 3325 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 17:27:06.775692 kubelet[3325]: I1212 17:27:06.775583 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-55" podStartSLOduration=1.77553249 podStartE2EDuration="1.77553249s" podCreationTimestamp="2025-12-12 17:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:06.772289586 +0000 UTC m=+1.794742246" watchObservedRunningTime="2025-12-12 17:27:06.77553249 +0000 UTC m=+1.797985138" Dec 12 17:27:06.891388 kubelet[3325]: I1212 17:27:06.891303 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-55" podStartSLOduration=1.891281239 podStartE2EDuration="1.891281239s" podCreationTimestamp="2025-12-12 17:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:06.834485731 +0000 UTC m=+1.856938403" watchObservedRunningTime="2025-12-12 17:27:06.891281239 +0000 UTC m=+1.913733887" Dec 12 17:27:06.930570 kubelet[3325]: I1212 17:27:06.930494 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-55" podStartSLOduration=4.930472111 podStartE2EDuration="4.930472111s" podCreationTimestamp="2025-12-12 17:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:06.894122479 +0000 UTC m=+1.916575103" watchObservedRunningTime="2025-12-12 17:27:06.930472111 +0000 UTC m=+1.952924783" Dec 12 17:27:09.628116 kubelet[3325]: I1212 17:27:09.627827 3325 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:27:09.630374 containerd[2012]: time="2025-12-12T17:27:09.630305901Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 12 17:27:09.632457 kubelet[3325]: I1212 17:27:09.632183 3325 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:27:10.562349 systemd[1]: Created slice kubepods-besteffort-podb2a34d65_8021_4c93_955f_19a237262cbd.slice - libcontainer container kubepods-besteffort-podb2a34d65_8021_4c93_955f_19a237262cbd.slice. Dec 12 17:27:10.665640 kubelet[3325]: I1212 17:27:10.665569 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2a34d65-8021-4c93-955f-19a237262cbd-kube-proxy\") pod \"kube-proxy-44tt8\" (UID: \"b2a34d65-8021-4c93-955f-19a237262cbd\") " pod="kube-system/kube-proxy-44tt8" Dec 12 17:27:10.665640 kubelet[3325]: I1212 17:27:10.665642 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2a34d65-8021-4c93-955f-19a237262cbd-xtables-lock\") pod \"kube-proxy-44tt8\" (UID: \"b2a34d65-8021-4c93-955f-19a237262cbd\") " pod="kube-system/kube-proxy-44tt8" Dec 12 17:27:10.666293 kubelet[3325]: I1212 17:27:10.665680 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2a34d65-8021-4c93-955f-19a237262cbd-lib-modules\") pod \"kube-proxy-44tt8\" (UID: \"b2a34d65-8021-4c93-955f-19a237262cbd\") " pod="kube-system/kube-proxy-44tt8" Dec 12 17:27:10.666293 kubelet[3325]: I1212 17:27:10.665719 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlgb6\" (UniqueName: \"kubernetes.io/projected/b2a34d65-8021-4c93-955f-19a237262cbd-kube-api-access-zlgb6\") pod \"kube-proxy-44tt8\" (UID: \"b2a34d65-8021-4c93-955f-19a237262cbd\") " pod="kube-system/kube-proxy-44tt8" Dec 12 17:27:10.853143 systemd[1]: Created slice kubepods-besteffort-pod8b694eca_2a96_4f59_951f_b83b88768a93.slice - libcontainer container kubepods-besteffort-pod8b694eca_2a96_4f59_951f_b83b88768a93.slice. 
Dec 12 17:27:10.885651 containerd[2012]: time="2025-12-12T17:27:10.885597767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44tt8,Uid:b2a34d65-8021-4c93-955f-19a237262cbd,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:10.949825 containerd[2012]: time="2025-12-12T17:27:10.949278755Z" level=info msg="connecting to shim d21ee1dff57a9cccf214f29e983b9a4480f96561a7f3fb201ae51a768f01b012" address="unix:///run/containerd/s/0701edbd6ffc790e3a8069b1c1c8b673eef3281706e8ea907d5402935c79a64a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:10.968091 kubelet[3325]: I1212 17:27:10.968047 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk7rv\" (UniqueName: \"kubernetes.io/projected/8b694eca-2a96-4f59-951f-b83b88768a93-kube-api-access-wk7rv\") pod \"tigera-operator-65cdcdfd6d-kk54b\" (UID: \"8b694eca-2a96-4f59-951f-b83b88768a93\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-kk54b" Dec 12 17:27:10.968366 kubelet[3325]: I1212 17:27:10.968337 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8b694eca-2a96-4f59-951f-b83b88768a93-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-kk54b\" (UID: \"8b694eca-2a96-4f59-951f-b83b88768a93\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-kk54b" Dec 12 17:27:10.994227 systemd[1]: Started cri-containerd-d21ee1dff57a9cccf214f29e983b9a4480f96561a7f3fb201ae51a768f01b012.scope - libcontainer container d21ee1dff57a9cccf214f29e983b9a4480f96561a7f3fb201ae51a768f01b012. Dec 12 17:27:11.025000 audit: BPF prog-id=140 op=LOAD Dec 12 17:27:11.028073 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:27:11.028142 kernel: audit: type=1334 audit(1765560431.025:453): prog-id=140 op=LOAD Dec 12 17:27:11.029000 audit: BPF prog-id=141 op=LOAD Dec 12 17:27:11.032778 kernel: audit: type=1334 audit(1765560431.029:454): prog-id=141 op=LOAD Dec 12 17:27:11.033161 kernel: audit: type=1300 audit(1765560431.029:454): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.029000 audit[3579]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.045047 kernel: audit: type=1327 audit(1765560431.029:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.029000 audit: BPF prog-id=141 op=UNLOAD Dec 12 17:27:11.047176 kernel: audit: type=1334 audit(1765560431.029:455): prog-id=141 op=UNLOAD Dec 12 17:27:11.029000 audit[3579]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.052990 kernel: audit: type=1300 audit(1765560431.029:455): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.053082 kernel: audit: type=1327 audit(1765560431.029:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.029000 audit: BPF prog-id=142 op=LOAD Dec 12 17:27:11.061512 kernel: audit: type=1334 audit(1765560431.029:456): prog-id=142 op=LOAD Dec 12 17:27:11.029000 audit[3579]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.068382 kernel: audit: type=1300 audit(1765560431.029:456): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.077982 kernel: audit: type=1327 audit(1765560431.029:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.029000 audit: BPF prog-id=143 op=LOAD Dec 12 17:27:11.029000 audit[3579]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.029000 audit: BPF prog-id=143 op=UNLOAD Dec 12 17:27:11.029000 audit[3579]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 
a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.029000 audit: BPF prog-id=142 op=UNLOAD Dec 12 17:27:11.029000 audit[3579]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.029000 audit: BPF prog-id=144 op=LOAD Dec 12 17:27:11.029000 audit[3579]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3566 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432316565316466663537613963636366323134663239653938336239 Dec 12 17:27:11.104238 containerd[2012]: time="2025-12-12T17:27:11.104108936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44tt8,Uid:b2a34d65-8021-4c93-955f-19a237262cbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d21ee1dff57a9cccf214f29e983b9a4480f96561a7f3fb201ae51a768f01b012\"" Dec 12 17:27:11.122973 containerd[2012]: time="2025-12-12T17:27:11.122121716Z" level=info msg="CreateContainer within sandbox \"d21ee1dff57a9cccf214f29e983b9a4480f96561a7f3fb201ae51a768f01b012\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:27:11.148169 containerd[2012]: time="2025-12-12T17:27:11.148118624Z" level=info msg="Container 267cf2e29d90886b2374c7e15f157c257fabfb7b136cd2b6c1ed746eab41d533: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:11.166973 containerd[2012]: time="2025-12-12T17:27:11.166838720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-kk54b,Uid:8b694eca-2a96-4f59-951f-b83b88768a93,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:27:11.169100 containerd[2012]: time="2025-12-12T17:27:11.169052192Z" level=info msg="CreateContainer within sandbox \"d21ee1dff57a9cccf214f29e983b9a4480f96561a7f3fb201ae51a768f01b012\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"267cf2e29d90886b2374c7e15f157c257fabfb7b136cd2b6c1ed746eab41d533\"" Dec 12 17:27:11.171100 containerd[2012]: time="2025-12-12T17:27:11.171041216Z" level=info msg="StartContainer for \"267cf2e29d90886b2374c7e15f157c257fabfb7b136cd2b6c1ed746eab41d533\"" Dec 12 17:27:11.175207 containerd[2012]: time="2025-12-12T17:27:11.175155644Z" level=info msg="connecting to shim 
267cf2e29d90886b2374c7e15f157c257fabfb7b136cd2b6c1ed746eab41d533" address="unix:///run/containerd/s/0701edbd6ffc790e3a8069b1c1c8b673eef3281706e8ea907d5402935c79a64a" protocol=ttrpc version=3 Dec 12 17:27:11.215253 systemd[1]: Started cri-containerd-267cf2e29d90886b2374c7e15f157c257fabfb7b136cd2b6c1ed746eab41d533.scope - libcontainer container 267cf2e29d90886b2374c7e15f157c257fabfb7b136cd2b6c1ed746eab41d533. Dec 12 17:27:11.245435 containerd[2012]: time="2025-12-12T17:27:11.245338281Z" level=info msg="connecting to shim b5525fbba541687bc4c990607e68864a00a0dc3054f1b1c97dc82f9721e585d3" address="unix:///run/containerd/s/cc8da83a028868cc65001f3a3be703833c91d412b00ce5ea61956dcae2d40e79" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:11.298238 systemd[1]: Started cri-containerd-b5525fbba541687bc4c990607e68864a00a0dc3054f1b1c97dc82f9721e585d3.scope - libcontainer container b5525fbba541687bc4c990607e68864a00a0dc3054f1b1c97dc82f9721e585d3. Dec 12 17:27:11.329000 audit: BPF prog-id=145 op=LOAD Dec 12 17:27:11.329000 audit[3604]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3566 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236376366326532396439303838366232333734633765313566313537 Dec 12 17:27:11.329000 audit: BPF prog-id=146 op=LOAD Dec 12 17:27:11.329000 audit[3604]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3566 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236376366326532396439303838366232333734633765313566313537 Dec 12 17:27:11.329000 audit: BPF prog-id=146 op=UNLOAD Dec 12 17:27:11.329000 audit[3604]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3566 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236376366326532396439303838366232333734633765313566313537 Dec 12 17:27:11.329000 audit: BPF prog-id=145 op=UNLOAD Dec 12 17:27:11.329000 audit[3604]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3566 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.329000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236376366326532396439303838366232333734633765313566313537 Dec 12 17:27:11.329000 audit: BPF prog-id=147 op=LOAD Dec 12 17:27:11.329000 audit[3604]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3566 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236376366326532396439303838366232333734633765313566313537 Dec 12 17:27:11.338000 audit: BPF prog-id=148 op=LOAD Dec 12 17:27:11.340000 audit: BPF prog-id=149 op=LOAD Dec 12 17:27:11.340000 audit[3642]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3623 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353235666262613534313638376263346339393036303765363838 Dec 12 17:27:11.340000 audit: BPF prog-id=149 op=UNLOAD Dec 12 17:27:11.340000 audit[3642]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3623 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353235666262613534313638376263346339393036303765363838 Dec 12 17:27:11.341000 audit: BPF prog-id=150 op=LOAD Dec 12 17:27:11.341000 audit[3642]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3623 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353235666262613534313638376263346339393036303765363838 Dec 12 17:27:11.344000 audit: BPF prog-id=151 op=LOAD Dec 12 17:27:11.344000 audit[3642]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3623 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.344000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353235666262613534313638376263346339393036303765363838 Dec 12 17:27:11.344000 audit: BPF prog-id=151 op=UNLOAD Dec 12 17:27:11.344000 audit[3642]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3623 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353235666262613534313638376263346339393036303765363838 Dec 12 17:27:11.344000 audit: BPF prog-id=150 op=UNLOAD Dec 12 17:27:11.344000 audit[3642]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3623 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353235666262613534313638376263346339393036303765363838 Dec 12 17:27:11.344000 audit: BPF prog-id=152 op=LOAD Dec 12 17:27:11.344000 audit[3642]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3623 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353235666262613534313638376263346339393036303765363838 Dec 12 17:27:11.385135 containerd[2012]: time="2025-12-12T17:27:11.384924849Z" level=info msg="StartContainer for \"267cf2e29d90886b2374c7e15f157c257fabfb7b136cd2b6c1ed746eab41d533\" returns successfully" Dec 12 17:27:11.442535 containerd[2012]: time="2025-12-12T17:27:11.442467394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-kk54b,Uid:8b694eca-2a96-4f59-951f-b83b88768a93,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b5525fbba541687bc4c990607e68864a00a0dc3054f1b1c97dc82f9721e585d3\"" Dec 12 17:27:11.448592 containerd[2012]: time="2025-12-12T17:27:11.448087210Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:27:11.772000 audit[3713]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:11.772000 audit[3713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2a4a990 a2=0 a3=1 items=0 ppid=3625 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.772000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:27:11.776000 audit[3715]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:11.776000 audit[3715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcdb09140 a2=0 a3=1 items=0 ppid=3625 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:27:11.778000 audit[3718]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:11.778000 audit[3718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd698560 a2=0 a3=1 items=0 ppid=3625 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:27:11.782000 audit[3719]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.782000 audit[3719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffddb71190 a2=0 a3=1 items=0 ppid=3625 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.782000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:27:11.793182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911837761.mount: Deactivated successfully. 
Dec 12 17:27:11.802000 audit[3720]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.802000 audit[3720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8e6ca50 a2=0 a3=1 items=0 ppid=3625 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:27:11.812000 audit[3721]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.812000 audit[3721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3958050 a2=0 a3=1 items=0 ppid=3625 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:27:11.883000 audit[3723]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.883000 audit[3723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd2b11400 a2=0 a3=1 items=0 ppid=3625 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.883000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:27:11.890000 audit[3725]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.890000 audit[3725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd339b7a0 a2=0 a3=1 items=0 ppid=3625 pid=3725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 12 17:27:11.901000 audit[3728]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.901000 audit[3728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff24941c0 a2=0 a3=1 items=0 ppid=3625 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.901000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 12 17:27:11.904000 audit[3729]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.904000 audit[3729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc518ea00 a2=0 a3=1 items=0 ppid=3625 pid=3729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:27:11.911000 audit[3731]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.911000 audit[3731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe8a8b410 a2=0 a3=1 items=0 ppid=3625 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:27:11.914000 audit[3732]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.914000 audit[3732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd85459a0 a2=0 a3=1 items=0 ppid=3625 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:27:11.919000 audit[3734]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.919000 audit[3734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc0ab9c10 a2=0 a3=1 items=0 ppid=3625 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.919000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:11.927000 audit[3737]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.927000 audit[3737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc24c3d50 a2=0 a3=1 items=0 ppid=3625 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.927000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:11.929000 audit[3738]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3738 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.929000 audit[3738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1fdb9d0 a2=0 a3=1 items=0 ppid=3625 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:27:11.934000 audit[3740]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.934000 audit[3740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1bb4720 a2=0 a3=1 items=0 ppid=3625 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:27:11.937000 audit[3741]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.937000 audit[3741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffead6c540 a2=0 a3=1 items=0 ppid=3625 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:27:11.943000 audit[3743]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.943000 audit[3743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd3ef15d0 a2=0 a3=1 items=0 ppid=3625 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 12 17:27:11.951000 audit[3746]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.951000 audit[3746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff378ab20 a2=0 a3=1 
items=0 ppid=3625 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 12 17:27:11.959000 audit[3749]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.959000 audit[3749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe97f2dc0 a2=0 a3=1 items=0 ppid=3625 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.959000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 12 17:27:11.962000 audit[3750]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.962000 audit[3750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdc38b2f0 a2=0 a3=1 items=0 ppid=3625 pid=3750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:27:11.967000 audit[3752]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.967000 audit[3752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdb687cd0 a2=0 a3=1 items=0 ppid=3625 pid=3752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:11.975000 audit[3755]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3755 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.975000 audit[3755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc00b1e50 a2=0 a3=1 items=0 ppid=3625 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.975000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:11.980000 audit[3756]: NETFILTER_CFG table=nat:77 family=2 
entries=1 op=nft_register_chain pid=3756 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.980000 audit[3756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4f29350 a2=0 a3=1 items=0 ppid=3625 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.980000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:27:11.986000 audit[3758]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:27:11.986000 audit[3758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff5ca7e90 a2=0 a3=1 items=0 ppid=3625 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:27:12.033000 audit[3764]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:12.033000 audit[3764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffa9a1300 a2=0 a3=1 items=0 ppid=3625 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:12.043000 audit[3764]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:12.043000 audit[3764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffffa9a1300 a2=0 a3=1 items=0 ppid=3625 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:12.046000 audit[3769]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3769 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.046000 audit[3769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffeebbd900 a2=0 a3=1 items=0 ppid=3625 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.046000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:27:12.053000 audit[3771]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.053000 audit[3771]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=836 a0=3 a1=fffff50a1120 a2=0 a3=1 items=0 ppid=3625 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 12 17:27:12.062000 audit[3774]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.062000 audit[3774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffbec9fc0 a2=0 a3=1 items=0 ppid=3625 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.062000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 12 17:27:12.064000 audit[3775]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.064000 audit[3775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee307070 a2=0 a3=1 items=0 ppid=3625 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:27:12.070000 audit[3777]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.070000 audit[3777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdf26b090 a2=0 a3=1 items=0 ppid=3625 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:27:12.072000 audit[3778]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.072000 audit[3778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2da9910 a2=0 a3=1 items=0 ppid=3625 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:27:12.078000 audit[3780]: NETFILTER_CFG table=filter:87 family=10 
entries=1 op=nft_register_rule pid=3780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.078000 audit[3780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd09539b0 a2=0 a3=1 items=0 ppid=3625 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:12.089000 audit[3783]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.089000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc9dbacc0 a2=0 a3=1 items=0 ppid=3625 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.089000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:12.092000 audit[3784]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.092000 audit[3784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffced4d190 a2=0 a3=1 items=0 ppid=3625 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.092000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:27:12.098000 audit[3786]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.098000 audit[3786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1dbbfd0 a2=0 a3=1 items=0 ppid=3625 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:27:12.101000 audit[3787]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.101000 audit[3787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf79c0f0 a2=0 a3=1 items=0 ppid=3625 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.101000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:27:12.107000 audit[3789]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.107000 audit[3789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffef3c4770 a2=0 a3=1 items=0 ppid=3625 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.107000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 12 17:27:12.115000 audit[3792]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.115000 audit[3792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc7146d60 a2=0 a3=1 items=0 ppid=3625 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.115000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 12 17:27:12.123000 audit[3795]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.123000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff6cadad0 a2=0 a3=1 items=0 ppid=3625 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 12 17:27:12.126000 audit[3796]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.126000 audit[3796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe09603a0 a2=0 a3=1 items=0 ppid=3625 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.126000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:27:12.131000 audit[3798]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.131000 audit[3798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff9c18f20 a2=0 a3=1 items=0 ppid=3625 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.131000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:12.141000 audit[3801]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.141000 audit[3801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcf9ca000 a2=0 a3=1 items=0 ppid=3625 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:27:12.144000 audit[3802]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.144000 audit[3802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1ae4630 a2=0 a3=1 items=0 ppid=3625 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:27:12.150000 audit[3804]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.150000 audit[3804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffce2e8470 a2=0 a3=1 items=0 ppid=3625 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:27:12.152000 audit[3805]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3805 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.152000 audit[3805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffece414e0 a2=0 a3=1 items=0 ppid=3625 pid=3805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:27:12.157000 audit[3807]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.157000 audit[3807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd4974b00 a2=0 a3=1 items=0 ppid=3625 pid=3807 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:27:12.165000 audit[3810]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3810 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:27:12.165000 audit[3810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdb0c0350 a2=0 a3=1 items=0 ppid=3625 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:27:12.172000 audit[3812]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3812 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:27:12.172000 audit[3812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffdf99c830 a2=0 a3=1 items=0 ppid=3625 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.172000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:12.174000 audit[3812]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3812 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:27:12.174000 audit[3812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffdf99c830 a2=0 a3=1 items=0 ppid=3625 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:12.174000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:12.944282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250039052.mount: Deactivated successfully. 
Dec 12 17:27:13.689146 containerd[2012]: time="2025-12-12T17:27:13.689100421Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:13.694131 containerd[2012]: time="2025-12-12T17:27:13.694051945Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 12 17:27:13.696376 containerd[2012]: time="2025-12-12T17:27:13.696304693Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:13.702630 containerd[2012]: time="2025-12-12T17:27:13.702548329Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:13.705264 containerd[2012]: time="2025-12-12T17:27:13.705196201Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.257048535s" Dec 12 17:27:13.705264 containerd[2012]: time="2025-12-12T17:27:13.705252709Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:27:13.714991 containerd[2012]: time="2025-12-12T17:27:13.714888625Z" level=info msg="CreateContainer within sandbox \"b5525fbba541687bc4c990607e68864a00a0dc3054f1b1c97dc82f9721e585d3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:27:13.737921 containerd[2012]: time="2025-12-12T17:27:13.736237897Z" level=info msg="Container d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:13.746735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370809188.mount: Deactivated successfully. Dec 12 17:27:13.753472 containerd[2012]: time="2025-12-12T17:27:13.753296713Z" level=info msg="CreateContainer within sandbox \"b5525fbba541687bc4c990607e68864a00a0dc3054f1b1c97dc82f9721e585d3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3\"" Dec 12 17:27:13.755398 containerd[2012]: time="2025-12-12T17:27:13.755333221Z" level=info msg="StartContainer for \"d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3\"" Dec 12 17:27:13.759446 containerd[2012]: time="2025-12-12T17:27:13.759378469Z" level=info msg="connecting to shim d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3" address="unix:///run/containerd/s/cc8da83a028868cc65001f3a3be703833c91d412b00ce5ea61956dcae2d40e79" protocol=ttrpc version=3 Dec 12 17:27:13.798439 systemd[1]: Started cri-containerd-d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3.scope - libcontainer container d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3. 
Dec 12 17:27:13.822000 audit: BPF prog-id=153 op=LOAD Dec 12 17:27:13.823000 audit: BPF prog-id=154 op=LOAD Dec 12 17:27:13.823000 audit[3821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=3623 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:13.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313032666336326431303836613536303161633039383630613862 Dec 12 17:27:13.823000 audit: BPF prog-id=154 op=UNLOAD Dec 12 17:27:13.823000 audit[3821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3623 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:13.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313032666336326431303836613536303161633039383630613862 Dec 12 17:27:13.824000 audit: BPF prog-id=155 op=LOAD Dec 12 17:27:13.824000 audit[3821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=3623 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:13.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313032666336326431303836613536303161633039383630613862 Dec 12 17:27:13.824000 audit: BPF prog-id=156 op=LOAD Dec 12 17:27:13.824000 audit[3821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=3623 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:13.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313032666336326431303836613536303161633039383630613862 Dec 12 17:27:13.824000 audit: BPF prog-id=156 op=UNLOAD Dec 12 17:27:13.824000 audit[3821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3623 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:13.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313032666336326431303836613536303161633039383630613862 Dec 12 17:27:13.824000 audit: BPF prog-id=155 op=UNLOAD Dec 12 17:27:13.824000 audit[3821]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3623 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:13.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313032666336326431303836613536303161633039383630613862 Dec 12 17:27:13.825000 audit: BPF prog-id=157 op=LOAD Dec 12 17:27:13.825000 audit[3821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=3623 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:13.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313032666336326431303836613536303161633039383630613862 Dec 12 17:27:13.866156 containerd[2012]: time="2025-12-12T17:27:13.866091614Z" level=info msg="StartContainer for \"d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3\" returns successfully" Dec 12 17:27:14.615482 kubelet[3325]: I1212 17:27:14.614926 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-44tt8" podStartSLOduration=4.614904205 podStartE2EDuration="4.614904205s" podCreationTimestamp="2025-12-12 17:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:11.603586498 +0000 UTC m=+6.626039230" watchObservedRunningTime="2025-12-12 17:27:14.614904205 +0000 UTC m=+9.637356865" Dec 12 17:27:15.850184 kubelet[3325]: I1212 17:27:15.849640 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-kk54b" podStartSLOduration=3.588414288 podStartE2EDuration="5.849617271s" podCreationTimestamp="2025-12-12 17:27:10 +0000 UTC" firstStartedPulling="2025-12-12 17:27:11.445351006 +0000 UTC m=+6.467803654" lastFinishedPulling="2025-12-12 17:27:13.706554001 +0000 UTC m=+8.729006637" observedRunningTime="2025-12-12 17:27:14.616111681 +0000 UTC m=+9.638564305" watchObservedRunningTime="2025-12-12 17:27:15.849617271 +0000 UTC m=+10.872069931" Dec 12 17:27:21.337241 sudo[2352]: pam_unix(sudo:session): session closed for user root Dec 12 17:27:21.336000 audit[2352]: USER_END pid=2352 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:27:21.338658 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 17:27:21.338741 kernel: audit: type=1106 audit(1765560441.336:533): pid=2352 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:21.344000 audit[2352]: CRED_DISP pid=2352 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:27:21.353832 kernel: audit: type=1104 audit(1765560441.344:534): pid=2352 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:27:21.369066 sshd[2351]: Connection closed by 139.178.68.195 port 39150 Dec 12 17:27:21.372141 sshd-session[2348]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:21.376000 audit[2348]: USER_END pid=2348 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:27:21.388256 systemd[1]: sshd@6-172.31.16.55:22-139.178.68.195:39150.service: Deactivated successfully. Dec 12 17:27:21.376000 audit[2348]: CRED_DISP pid=2348 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:27:21.394954 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:27:21.395559 systemd[1]: session-7.scope: Consumed 10.898s CPU time, 221.5M memory peak. Dec 12 17:27:21.397079 kernel: audit: type=1106 audit(1765560441.376:535): pid=2348 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:27:21.397173 kernel: audit: type=1104 audit(1765560441.376:536): pid=2348 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:27:21.403083 systemd-logind[1962]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:27:21.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.55:22-139.178.68.195:39150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:21.409979 kernel: audit: type=1131 audit(1765560441.389:537): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.55:22-139.178.68.195:39150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:21.413252 systemd-logind[1962]: Removed session 7. 
Dec 12 17:27:23.891000 audit[3902]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:23.891000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcfe62a10 a2=0 a3=1 items=0 ppid=3625 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:23.907096 kernel: audit: type=1325 audit(1765560443.891:538): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:23.907231 kernel: audit: type=1300 audit(1765560443.891:538): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcfe62a10 a2=0 a3=1 items=0 ppid=3625 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:23.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:23.914330 kernel: audit: type=1327 audit(1765560443.891:538): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:23.902000 audit[3902]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:23.919227 kernel: audit: type=1325 audit(1765560443.902:539): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:23.902000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcfe62a10 a2=0 a3=1 items=0 ppid=3625 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:23.928470 kernel: audit: type=1300 audit(1765560443.902:539): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcfe62a10 a2=0 a3=1 items=0 ppid=3625 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:23.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:24.972000 audit[3904]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:24.972000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd95e4360 a2=0 a3=1 items=0 ppid=3625 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:24.980000 audit[3904]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:24.980000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=2700 a0=3 a1=ffffd95e4360 a2=0 a3=1 items=0 ppid=3625 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:33.165000 audit[3907]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:33.167823 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:27:33.167944 kernel: audit: type=1325 audit(1765560453.165:542): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:33.165000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff5dce4d0 a2=0 a3=1 items=0 ppid=3625 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:33.179804 kernel: audit: type=1300 audit(1765560453.165:542): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff5dce4d0 a2=0 a3=1 items=0 ppid=3625 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:33.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:33.185676 kernel: audit: type=1327 audit(1765560453.165:542): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:33.186938 kernel: audit: type=1325 audit(1765560453.185:543): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:33.185000 audit[3907]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:33.185000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5dce4d0 a2=0 a3=1 items=0 ppid=3625 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:33.198378 kernel: audit: type=1300 audit(1765560453.185:543): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5dce4d0 a2=0 a3=1 items=0 ppid=3625 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:33.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:33.204663 kernel: audit: type=1327 audit(1765560453.185:543): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:34.289000 audit[3910]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:34.289000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=7480 a0=3 a1=ffffffa561e0 a2=0 a3=1 items=0 ppid=3625 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:34.302574 kernel: audit: type=1325 audit(1765560454.289:544): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:34.302728 kernel: audit: type=1300 audit(1765560454.289:544): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffffa561e0 a2=0 a3=1 items=0 ppid=3625 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:34.289000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:34.310707 kernel: audit: type=1327 audit(1765560454.289:544): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:34.310000 audit[3910]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:34.317896 kernel: audit: type=1325 audit(1765560454.310:545): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:34.310000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffffa561e0 a2=0 a3=1 items=0 ppid=3625 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:34.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:37.609764 systemd[1]: Created slice kubepods-besteffort-podf41afd9a_6bdf_465d_8c47_443cd7291a02.slice - libcontainer container kubepods-besteffort-podf41afd9a_6bdf_465d_8c47_443cd7291a02.slice. 
Dec 12 17:27:37.651000 audit[3912]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:37.651000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffca1480f0 a2=0 a3=1 items=0 ppid=3625 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:37.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:37.665000 audit[3912]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:37.665000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca1480f0 a2=0 a3=1 items=0 ppid=3625 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:37.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:37.751205 kubelet[3325]: I1212 17:27:37.751138 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f41afd9a-6bdf-465d-8c47-443cd7291a02-tigera-ca-bundle\") pod \"calico-typha-6cfd576759-q5g9f\" (UID: \"f41afd9a-6bdf-465d-8c47-443cd7291a02\") " pod="calico-system/calico-typha-6cfd576759-q5g9f" Dec 12 17:27:37.751205 kubelet[3325]: I1212 17:27:37.751214 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f41afd9a-6bdf-465d-8c47-443cd7291a02-typha-certs\") pod \"calico-typha-6cfd576759-q5g9f\" (UID: \"f41afd9a-6bdf-465d-8c47-443cd7291a02\") " pod="calico-system/calico-typha-6cfd576759-q5g9f" Dec 12 17:27:37.752065 kubelet[3325]: I1212 17:27:37.751262 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6v9j\" (UniqueName: \"kubernetes.io/projected/f41afd9a-6bdf-465d-8c47-443cd7291a02-kube-api-access-q6v9j\") pod \"calico-typha-6cfd576759-q5g9f\" (UID: \"f41afd9a-6bdf-465d-8c47-443cd7291a02\") " pod="calico-system/calico-typha-6cfd576759-q5g9f" Dec 12 17:27:37.895475 systemd[1]: Created slice kubepods-besteffort-pod8966c8bc_8954_4798_afcb_cd6d1ce83928.slice - libcontainer container kubepods-besteffort-pod8966c8bc_8954_4798_afcb_cd6d1ce83928.slice. 
Dec 12 17:27:37.933066 containerd[2012]: time="2025-12-12T17:27:37.932749009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cfd576759-q5g9f,Uid:f41afd9a-6bdf-465d-8c47-443cd7291a02,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:37.953437 kubelet[3325]: I1212 17:27:37.953326 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8966c8bc-8954-4798-afcb-cd6d1ce83928-cni-bin-dir\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.953605 kubelet[3325]: I1212 17:27:37.953495 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8966c8bc-8954-4798-afcb-cd6d1ce83928-cni-log-dir\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.953605 kubelet[3325]: I1212 17:27:37.953582 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8966c8bc-8954-4798-afcb-cd6d1ce83928-cni-net-dir\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.953990 kubelet[3325]: I1212 17:27:37.953667 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8966c8bc-8954-4798-afcb-cd6d1ce83928-tigera-ca-bundle\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.953990 kubelet[3325]: I1212 17:27:37.953757 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8966c8bc-8954-4798-afcb-cd6d1ce83928-lib-modules\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.953990 kubelet[3325]: I1212 17:27:37.953940 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8966c8bc-8954-4798-afcb-cd6d1ce83928-var-lib-calico\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.954231 kubelet[3325]: I1212 17:27:37.954031 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz75l\" (UniqueName: \"kubernetes.io/projected/8966c8bc-8954-4798-afcb-cd6d1ce83928-kube-api-access-gz75l\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.954231 kubelet[3325]: I1212 17:27:37.954117 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8966c8bc-8954-4798-afcb-cd6d1ce83928-flexvol-driver-host\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.954437 kubelet[3325]: I1212 17:27:37.954266 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/8966c8bc-8954-4798-afcb-cd6d1ce83928-policysync\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.954497 kubelet[3325]: I1212 17:27:37.954380 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8966c8bc-8954-4798-afcb-cd6d1ce83928-node-certs\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.954589 kubelet[3325]: I1212 17:27:37.954526 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8966c8bc-8954-4798-afcb-cd6d1ce83928-var-run-calico\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.954649 kubelet[3325]: I1212 17:27:37.954616 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8966c8bc-8954-4798-afcb-cd6d1ce83928-xtables-lock\") pod \"calico-node-mn7dh\" (UID: \"8966c8bc-8954-4798-afcb-cd6d1ce83928\") " pod="calico-system/calico-node-mn7dh" Dec 12 17:27:37.997384 containerd[2012]: time="2025-12-12T17:27:37.997234741Z" level=info msg="connecting to shim 1c823c57156398379747d0a0c0ffb28ea78646c5c332b0f7388d3bb11dcded4e" address="unix:///run/containerd/s/0bb6239d85654ef57b816b55a352bb1c2b1e0d800ae22b8d913b66af6a9e9aae" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:38.064892 kubelet[3325]: E1212 17:27:38.062966 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.064892 kubelet[3325]: W1212 17:27:38.063021 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.064892 kubelet[3325]: E1212 17:27:38.063056 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.064892 kubelet[3325]: E1212 17:27:38.064261 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.064892 kubelet[3325]: W1212 17:27:38.064405 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.064892 kubelet[3325]: E1212 17:27:38.064440 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:38.067791 kubelet[3325]: E1212 17:27:38.067740 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.068038 kubelet[3325]: W1212 17:27:38.067779 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.068038 kubelet[3325]: E1212 17:27:38.067931 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.072202 kubelet[3325]: E1212 17:27:38.071070 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.072202 kubelet[3325]: W1212 17:27:38.071228 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.072202 kubelet[3325]: E1212 17:27:38.071377 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.074356 kubelet[3325]: E1212 17:27:38.072566 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.074356 kubelet[3325]: W1212 17:27:38.074151 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.074356 kubelet[3325]: E1212 17:27:38.074316 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.076877 kubelet[3325]: E1212 17:27:38.076806 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.077051 kubelet[3325]: W1212 17:27:38.076901 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.077051 kubelet[3325]: E1212 17:27:38.076938 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.101355 kubelet[3325]: E1212 17:27:38.101294 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.101355 kubelet[3325]: W1212 17:27:38.101333 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.101557 kubelet[3325]: E1212 17:27:38.101390 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:38.117179 systemd[1]: Started cri-containerd-1c823c57156398379747d0a0c0ffb28ea78646c5c332b0f7388d3bb11dcded4e.scope - libcontainer container 1c823c57156398379747d0a0c0ffb28ea78646c5c332b0f7388d3bb11dcded4e. Dec 12 17:27:38.135160 kubelet[3325]: E1212 17:27:38.135108 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.135160 kubelet[3325]: W1212 17:27:38.135146 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.135368 kubelet[3325]: E1212 17:27:38.135188 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.153526 kubelet[3325]: E1212 17:27:38.152382 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:27:38.184412 kubelet[3325]: E1212 17:27:38.184282 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.184412 kubelet[3325]: W1212 17:27:38.184339 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.184412 kubelet[3325]: E1212 17:27:38.184371 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.185328 kubelet[3325]: E1212 17:27:38.185242 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.185573 kubelet[3325]: W1212 17:27:38.185275 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.185757 kubelet[3325]: E1212 17:27:38.185544 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.186977 kubelet[3325]: E1212 17:27:38.186930 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.187350 kubelet[3325]: W1212 17:27:38.187295 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.187445 kubelet[3325]: E1212 17:27:38.187348 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:38.188695 kubelet[3325]: E1212 17:27:38.188616 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.188695 kubelet[3325]: W1212 17:27:38.188660 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.188695 kubelet[3325]: E1212 17:27:38.188692 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.206166 kubelet[3325]: E1212 17:27:38.206186 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
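The records above all come from the kubelet's FlexVolume prober: the driver binary under nodeagent~uds is missing, so the init call produces no output, and decoding that empty output as JSON fails. A minimal Go sketch, illustrative only and not the kubelet's actual code path, reproduces the exact error string reported by driver-call.go:262:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The driver executable was never found, so the captured "output" is empty.
	output := ""

	var status map[string]interface{}
	if err := json.Unmarshal([]byte(output), &status); err != nil {
		fmt.Println(err) // prints: unexpected end of JSON input
	}
}
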
Error: unexpected end of JSON input" Dec 12 17:27:38.206961 kubelet[3325]: E1212 17:27:38.206916 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.206961 kubelet[3325]: W1212 17:27:38.206953 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.207232 kubelet[3325]: E1212 17:27:38.206985 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.209157 kubelet[3325]: E1212 17:27:38.209105 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.209157 kubelet[3325]: W1212 17:27:38.209145 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.209378 kubelet[3325]: E1212 17:27:38.209179 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.216590 containerd[2012]: time="2025-12-12T17:27:38.216503435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mn7dh,Uid:8966c8bc-8954-4798-afcb-cd6d1ce83928,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:38.235000 audit: BPF prog-id=158 op=LOAD Dec 12 17:27:38.237429 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 12 17:27:38.237536 kernel: audit: type=1334 audit(1765560458.235:548): prog-id=158 op=LOAD Dec 12 17:27:38.239000 audit: BPF prog-id=159 op=LOAD Dec 12 17:27:38.243580 kernel: audit: type=1334 audit(1765560458.239:549): prog-id=159 op=LOAD Dec 12 17:27:38.244740 kernel: audit: type=1300 audit(1765560458.239:549): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.239000 audit[3935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.260177 kernel: audit: type=1327 audit(1765560458.239:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.260313 kernel: audit: type=1334 audit(1765560458.240:550): prog-id=159 op=UNLOAD Dec 12 17:27:38.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.240000 audit: BPF prog-id=159 op=UNLOAD Dec 12 17:27:38.260524 kubelet[3325]: E1212 17:27:38.257970 3325 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.260524 kubelet[3325]: W1212 17:27:38.257995 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.260524 kubelet[3325]: E1212 17:27:38.258059 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.260524 kubelet[3325]: I1212 17:27:38.258134 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8797b6d6-7a5e-4865-91c2-2bd3d90f57cf-kubelet-dir\") pod \"csi-node-driver-hb6pw\" (UID: \"8797b6d6-7a5e-4865-91c2-2bd3d90f57cf\") " pod="calico-system/csi-node-driver-hb6pw" Dec 12 17:27:38.260524 kubelet[3325]: E1212 17:27:38.258697 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.260524 kubelet[3325]: W1212 17:27:38.258731 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.260524 kubelet[3325]: E1212 17:27:38.258762 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.260524 kubelet[3325]: I1212 17:27:38.258808 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vz24\" (UniqueName: \"kubernetes.io/projected/8797b6d6-7a5e-4865-91c2-2bd3d90f57cf-kube-api-access-4vz24\") pod \"csi-node-driver-hb6pw\" (UID: \"8797b6d6-7a5e-4865-91c2-2bd3d90f57cf\") " pod="calico-system/csi-node-driver-hb6pw" Dec 12 17:27:38.240000 audit[3935]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.272672 kernel: audit: type=1300 audit(1765560458.240:550): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.273999 kernel: audit: type=1327 audit(1765560458.240:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.274135 kubelet[3325]: E1212 17:27:38.267349 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Dec 12 17:27:38.274135 kubelet[3325]: W1212 17:27:38.267381 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.274135 kubelet[3325]: E1212 17:27:38.267414 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.274135 kubelet[3325]: I1212 17:27:38.267458 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8797b6d6-7a5e-4865-91c2-2bd3d90f57cf-socket-dir\") pod \"csi-node-driver-hb6pw\" (UID: \"8797b6d6-7a5e-4865-91c2-2bd3d90f57cf\") " pod="calico-system/csi-node-driver-hb6pw" Dec 12 17:27:38.274135 kubelet[3325]: E1212 17:27:38.268186 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.274135 kubelet[3325]: W1212 17:27:38.268215 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.274135 kubelet[3325]: E1212 17:27:38.268247 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.274135 kubelet[3325]: I1212 17:27:38.268288 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8797b6d6-7a5e-4865-91c2-2bd3d90f57cf-varrun\") pod \"csi-node-driver-hb6pw\" (UID: \"8797b6d6-7a5e-4865-91c2-2bd3d90f57cf\") " pod="calico-system/csi-node-driver-hb6pw" Dec 12 17:27:38.274135 kubelet[3325]: E1212 17:27:38.268669 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.274600 kubelet[3325]: W1212 17:27:38.268691 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.274600 kubelet[3325]: E1212 17:27:38.268716 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:38.274600 kubelet[3325]: I1212 17:27:38.268750 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8797b6d6-7a5e-4865-91c2-2bd3d90f57cf-registration-dir\") pod \"csi-node-driver-hb6pw\" (UID: \"8797b6d6-7a5e-4865-91c2-2bd3d90f57cf\") " pod="calico-system/csi-node-driver-hb6pw" Dec 12 17:27:38.274600 kubelet[3325]: E1212 17:27:38.269379 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.274600 kubelet[3325]: W1212 17:27:38.269412 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.274600 kubelet[3325]: E1212 17:27:38.269441 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.274600 kubelet[3325]: E1212 17:27:38.272341 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.274600 kubelet[3325]: W1212 17:27:38.272370 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.274600 kubelet[3325]: E1212 17:27:38.272401 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.279394 kubelet[3325]: E1212 17:27:38.279215 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.279394 kubelet[3325]: W1212 17:27:38.279241 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.279394 kubelet[3325]: E1212 17:27:38.279272 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.280159 kubelet[3325]: E1212 17:27:38.280080 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.280159 kubelet[3325]: W1212 17:27:38.280129 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.280159 kubelet[3325]: E1212 17:27:38.280161 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:38.281453 kubelet[3325]: E1212 17:27:38.280657 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.281453 kubelet[3325]: W1212 17:27:38.280679 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.281453 kubelet[3325]: E1212 17:27:38.280707 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.283282 kernel: audit: type=1334 audit(1765560458.241:551): prog-id=160 op=LOAD Dec 12 17:27:38.284370 kernel: audit: type=1300 audit(1765560458.241:551): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.241000 audit: BPF prog-id=160 op=LOAD Dec 12 17:27:38.241000 audit[3935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.284621 kubelet[3325]: E1212 17:27:38.283981 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.284621 kubelet[3325]: W1212 17:27:38.284009 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.284621 kubelet[3325]: E1212 17:27:38.284063 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.286680 kubelet[3325]: E1212 17:27:38.286411 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.286680 kubelet[3325]: W1212 17:27:38.286443 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.286680 kubelet[3325]: E1212 17:27:38.286486 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:38.292052 kubelet[3325]: E1212 17:27:38.291443 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.292607 kubelet[3325]: W1212 17:27:38.292165 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.292607 kubelet[3325]: E1212 17:27:38.292225 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.299828 kernel: audit: type=1327 audit(1765560458.241:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.243000 audit: BPF prog-id=161 op=LOAD Dec 12 17:27:38.243000 audit[3935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.289000 audit: BPF prog-id=161 op=UNLOAD Dec 12 17:27:38.289000 audit[3935]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.291000 audit: BPF prog-id=160 op=UNLOAD Dec 12 17:27:38.291000 audit[3935]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.301673 kubelet[3325]: E1212 17:27:38.300609 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Dec 12 17:27:38.291000 audit: BPF prog-id=162 op=LOAD Dec 12 17:27:38.291000 audit[3935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3923 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383233633537313536333938333739373437643061306330666662 Dec 12 17:27:38.305165 kubelet[3325]: W1212 17:27:38.303363 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.305165 kubelet[3325]: E1212 17:27:38.303413 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.307283 kubelet[3325]: E1212 17:27:38.306102 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.307283 kubelet[3325]: W1212 17:27:38.306135 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.307283 kubelet[3325]: E1212 17:27:38.306166 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.321965 containerd[2012]: time="2025-12-12T17:27:38.321895067Z" level=info msg="connecting to shim 9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d" address="unix:///run/containerd/s/2569307da30bec7ecbd04fac32bd3c6f69db0aa0cf328678d7694fcf409a49b0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:38.370765 kubelet[3325]: E1212 17:27:38.370729 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.373822 kubelet[3325]: W1212 17:27:38.372423 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.373822 kubelet[3325]: E1212 17:27:38.372484 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.374262 kubelet[3325]: E1212 17:27:38.374167 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.374262 kubelet[3325]: W1212 17:27:38.374199 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.374262 kubelet[3325]: E1212 17:27:38.374232 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:38.412039 kubelet[3325]: E1212 17:27:38.411928 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
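The probing would succeed if an executable named uds existed under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ and answered the init call with a JSON status object on stdout. The stub below is a hypothetical stand-in written to the standard FlexVolume calling convention, not the real nodeagent driver; in practice the noise also stops if the stale nodeagent~uds directory is simply removed from the volume plugin directory.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	// The kubelet probes the driver as "<driver> init" and parses stdout as JSON;
	// an empty stdout is what produces "unexpected end of JSON input" above.
	if len(os.Args) > 1 && os.Args[1] == "init" {
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Other FlexVolume calls (attach, mount, ...) are reported as unsupported.
	reply(driverStatus{Status: "Not supported"})
}
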
Error: unexpected end of JSON input" Dec 12 17:27:38.412432 kubelet[3325]: E1212 17:27:38.412202 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.412432 kubelet[3325]: W1212 17:27:38.412242 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.412432 kubelet[3325]: E1212 17:27:38.412265 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.412598 kubelet[3325]: E1212 17:27:38.412522 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.412598 kubelet[3325]: W1212 17:27:38.412549 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.412598 kubelet[3325]: E1212 17:27:38.412569 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.413261 systemd[1]: Started cri-containerd-9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d.scope - libcontainer container 9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d. Dec 12 17:27:38.420099 kubelet[3325]: E1212 17:27:38.415659 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.420099 kubelet[3325]: W1212 17:27:38.415686 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.420099 kubelet[3325]: E1212 17:27:38.415719 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:38.454509 kubelet[3325]: E1212 17:27:38.454444 3325 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:38.454692 kubelet[3325]: W1212 17:27:38.454610 3325 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:38.454818 kubelet[3325]: E1212 17:27:38.454647 3325 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:38.484000 audit: BPF prog-id=163 op=LOAD Dec 12 17:27:38.487000 audit: BPF prog-id=164 op=LOAD Dec 12 17:27:38.487000 audit[4030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4018 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346461383534353935633836386135356239643439303037376638 Dec 12 17:27:38.487000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:27:38.487000 audit[4030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346461383534353935633836386135356239643439303037376638 Dec 12 17:27:38.488000 audit: BPF prog-id=165 op=LOAD Dec 12 17:27:38.488000 audit[4030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4018 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346461383534353935633836386135356239643439303037376638 Dec 12 17:27:38.488000 audit: BPF prog-id=166 op=LOAD Dec 12 17:27:38.488000 audit[4030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4018 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346461383534353935633836386135356239643439303037376638 Dec 12 17:27:38.489000 audit: BPF prog-id=166 op=UNLOAD Dec 12 17:27:38.489000 audit[4030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346461383534353935633836386135356239643439303037376638 Dec 12 17:27:38.489000 audit: BPF prog-id=165 op=UNLOAD Dec 
12 17:27:38.489000 audit[4030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346461383534353935633836386135356239643439303037376638 Dec 12 17:27:38.490000 audit: BPF prog-id=167 op=LOAD Dec 12 17:27:38.490000 audit[4030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4018 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346461383534353935633836386135356239643439303037376638 Dec 12 17:27:38.575119 containerd[2012]: time="2025-12-12T17:27:38.575053404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mn7dh,Uid:8966c8bc-8954-4798-afcb-cd6d1ce83928,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d\"" Dec 12 17:27:38.584266 containerd[2012]: time="2025-12-12T17:27:38.583447272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:27:38.603150 containerd[2012]: time="2025-12-12T17:27:38.603049824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cfd576759-q5g9f,Uid:f41afd9a-6bdf-465d-8c47-443cd7291a02,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c823c57156398379747d0a0c0ffb28ea78646c5c332b0f7388d3bb11dcded4e\"" Dec 12 17:27:38.702000 audit[4091]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=4091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:38.702000 audit[4091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc4d74820 a2=0 a3=1 items=0 ppid=3625 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:38.712000 audit[4091]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=4091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:38.712000 audit[4091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc4d74820 a2=0 a3=1 items=0 ppid=3625 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:38.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:39.423881 kubelet[3325]: E1212 17:27:39.423389 3325 pod_workers.go:1324] "Error syncing pod, skipping" 
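The long proctitle= values in the audit records are the invoked command line, hex-encoded with NUL bytes between arguments: the runc ones above decode to runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<truncated container id>, and the NETFILTER_CFG ones to iptables-restore -w 5 --noflush --counters. A small Go helper, illustrative only and not part of any tool in this log, does the decoding:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into a readable
// command line; arguments are NUL-separated in the decoded bytes.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	return strings.Join(args, " "), nil
}

func main() {
	// Leading portion of the runc proctitle that recurs in the records above.
	cmd, err := decodeProctitle("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F")
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // runc --root /run/containerd/runc/k8s.io
}
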
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:27:39.860113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676099848.mount: Deactivated successfully. Dec 12 17:27:40.103774 containerd[2012]: time="2025-12-12T17:27:40.103678428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:40.108906 containerd[2012]: time="2025-12-12T17:27:40.108783828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:40.112125 containerd[2012]: time="2025-12-12T17:27:40.111738672Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:40.122875 containerd[2012]: time="2025-12-12T17:27:40.122768820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:40.129484 containerd[2012]: time="2025-12-12T17:27:40.128190288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.543794896s" Dec 12 17:27:40.129484 containerd[2012]: time="2025-12-12T17:27:40.128265636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:27:40.133021 containerd[2012]: time="2025-12-12T17:27:40.132949428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:27:40.152124 containerd[2012]: time="2025-12-12T17:27:40.152047440Z" level=info msg="CreateContainer within sandbox \"9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:27:40.191905 containerd[2012]: time="2025-12-12T17:27:40.189023472Z" level=info msg="Container 536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:40.209485 containerd[2012]: time="2025-12-12T17:27:40.209127504Z" level=info msg="CreateContainer within sandbox \"9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab\"" Dec 12 17:27:40.213970 containerd[2012]: time="2025-12-12T17:27:40.211982484Z" level=info msg="StartContainer for \"536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab\"" Dec 12 17:27:40.218070 containerd[2012]: time="2025-12-12T17:27:40.217983516Z" level=info msg="connecting to shim 536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab" address="unix:///run/containerd/s/2569307da30bec7ecbd04fac32bd3c6f69db0aa0cf328678d7694fcf409a49b0" 
protocol=ttrpc version=3 Dec 12 17:27:40.271375 systemd[1]: Started cri-containerd-536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab.scope - libcontainer container 536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab. Dec 12 17:27:40.339000 audit: BPF prog-id=168 op=LOAD Dec 12 17:27:40.339000 audit[4100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001fe3e8 a2=98 a3=0 items=0 ppid=4018 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:40.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533366334663734623565326534333764636635303539346137306463 Dec 12 17:27:40.339000 audit: BPF prog-id=169 op=LOAD Dec 12 17:27:40.339000 audit[4100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001fe168 a2=98 a3=0 items=0 ppid=4018 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:40.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533366334663734623565326534333764636635303539346137306463 Dec 12 17:27:40.340000 audit: BPF prog-id=169 op=UNLOAD Dec 12 17:27:40.340000 audit[4100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:40.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533366334663734623565326534333764636635303539346137306463 Dec 12 17:27:40.340000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:27:40.340000 audit[4100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:40.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533366334663734623565326534333764636635303539346137306463 Dec 12 17:27:40.340000 audit: BPF prog-id=170 op=LOAD Dec 12 17:27:40.340000 audit[4100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001fe648 a2=98 a3=0 items=0 ppid=4018 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:40.340000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533366334663734623565326534333764636635303539346137306463 Dec 12 17:27:40.384158 containerd[2012]: time="2025-12-12T17:27:40.383985457Z" level=info msg="StartContainer for \"536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab\" returns successfully" Dec 12 17:27:40.410889 systemd[1]: cri-containerd-536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab.scope: Deactivated successfully. Dec 12 17:27:40.414000 audit: BPF prog-id=170 op=UNLOAD Dec 12 17:27:40.419367 containerd[2012]: time="2025-12-12T17:27:40.419257981Z" level=info msg="received container exit event container_id:\"536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab\" id:\"536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab\" pid:4114 exited_at:{seconds:1765560460 nanos:418264189}" Dec 12 17:27:40.475647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-536c4f74b5e2e437dcf50594a70dc45d529eaaa259425d5ba2f4b4886b54fbab-rootfs.mount: Deactivated successfully. Dec 12 17:27:41.429806 kubelet[3325]: E1212 17:27:41.429740 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:27:42.138611 containerd[2012]: time="2025-12-12T17:27:42.137435726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:42.139334 containerd[2012]: time="2025-12-12T17:27:42.139267982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 12 17:27:42.140395 containerd[2012]: time="2025-12-12T17:27:42.140345198Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:42.145644 containerd[2012]: time="2025-12-12T17:27:42.145575494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:42.147058 containerd[2012]: time="2025-12-12T17:27:42.146997122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.013979042s" Dec 12 17:27:42.147205 containerd[2012]: time="2025-12-12T17:27:42.147054614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:27:42.150803 containerd[2012]: time="2025-12-12T17:27:42.150363698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:27:42.194380 containerd[2012]: time="2025-12-12T17:27:42.194323310Z" level=info msg="CreateContainer within sandbox 
\"1c823c57156398379747d0a0c0ffb28ea78646c5c332b0f7388d3bb11dcded4e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:27:42.212090 containerd[2012]: time="2025-12-12T17:27:42.211154678Z" level=info msg="Container c6c0461c8967e9005afb3ad75098d710b5b0bbacbca626184b7d40841df5a22b: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:42.231406 containerd[2012]: time="2025-12-12T17:27:42.231314822Z" level=info msg="CreateContainer within sandbox \"1c823c57156398379747d0a0c0ffb28ea78646c5c332b0f7388d3bb11dcded4e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c6c0461c8967e9005afb3ad75098d710b5b0bbacbca626184b7d40841df5a22b\"" Dec 12 17:27:42.233925 containerd[2012]: time="2025-12-12T17:27:42.232675058Z" level=info msg="StartContainer for \"c6c0461c8967e9005afb3ad75098d710b5b0bbacbca626184b7d40841df5a22b\"" Dec 12 17:27:42.237260 containerd[2012]: time="2025-12-12T17:27:42.237149139Z" level=info msg="connecting to shim c6c0461c8967e9005afb3ad75098d710b5b0bbacbca626184b7d40841df5a22b" address="unix:///run/containerd/s/0bb6239d85654ef57b816b55a352bb1c2b1e0d800ae22b8d913b66af6a9e9aae" protocol=ttrpc version=3 Dec 12 17:27:42.277209 systemd[1]: Started cri-containerd-c6c0461c8967e9005afb3ad75098d710b5b0bbacbca626184b7d40841df5a22b.scope - libcontainer container c6c0461c8967e9005afb3ad75098d710b5b0bbacbca626184b7d40841df5a22b. Dec 12 17:27:42.303000 audit: BPF prog-id=171 op=LOAD Dec 12 17:27:42.304000 audit: BPF prog-id=172 op=LOAD Dec 12 17:27:42.304000 audit[4159]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3923 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:42.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633034363163383936376539303035616662336164373530393864 Dec 12 17:27:42.305000 audit: BPF prog-id=172 op=UNLOAD Dec 12 17:27:42.305000 audit[4159]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3923 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:42.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633034363163383936376539303035616662336164373530393864 Dec 12 17:27:42.306000 audit: BPF prog-id=173 op=LOAD Dec 12 17:27:42.306000 audit[4159]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3923 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:42.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633034363163383936376539303035616662336164373530393864 Dec 12 17:27:42.306000 audit: BPF prog-id=174 op=LOAD Dec 12 
17:27:42.306000 audit[4159]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3923 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:42.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633034363163383936376539303035616662336164373530393864 Dec 12 17:27:42.307000 audit: BPF prog-id=174 op=UNLOAD Dec 12 17:27:42.307000 audit[4159]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3923 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:42.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633034363163383936376539303035616662336164373530393864 Dec 12 17:27:42.307000 audit: BPF prog-id=173 op=UNLOAD Dec 12 17:27:42.307000 audit[4159]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3923 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:42.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633034363163383936376539303035616662336164373530393864 Dec 12 17:27:42.307000 audit: BPF prog-id=175 op=LOAD Dec 12 17:27:42.307000 audit[4159]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3923 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:42.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633034363163383936376539303035616662336164373530393864 Dec 12 17:27:42.366348 containerd[2012]: time="2025-12-12T17:27:42.366210387Z" level=info msg="StartContainer for \"c6c0461c8967e9005afb3ad75098d710b5b0bbacbca626184b7d40841df5a22b\" returns successfully" Dec 12 17:27:42.818877 kubelet[3325]: I1212 17:27:42.818745 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cfd576759-q5g9f" podStartSLOduration=2.276783611 podStartE2EDuration="5.818716445s" podCreationTimestamp="2025-12-12 17:27:37 +0000 UTC" firstStartedPulling="2025-12-12 17:27:38.607294704 +0000 UTC m=+33.629747340" lastFinishedPulling="2025-12-12 17:27:42.149227526 +0000 UTC m=+37.171680174" observedRunningTime="2025-12-12 17:27:42.770763269 +0000 UTC m=+37.793215917" watchObservedRunningTime="2025-12-12 17:27:42.818716445 +0000 UTC m=+37.841169081" Dec 12 17:27:42.853000 audit[4195]: NETFILTER_CFG table=filter:117 
family=2 entries=21 op=nft_register_rule pid=4195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:42.853000 audit[4195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc2eed960 a2=0 a3=1 items=0 ppid=3625 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:42.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:42.860000 audit[4195]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:42.860000 audit[4195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc2eed960 a2=0 a3=1 items=0 ppid=3625 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:42.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:43.424112 kubelet[3325]: E1212 17:27:43.423841 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:27:45.094707 containerd[2012]: time="2025-12-12T17:27:45.093343145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:45.095625 containerd[2012]: time="2025-12-12T17:27:45.095522381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 12 17:27:45.098600 containerd[2012]: time="2025-12-12T17:27:45.098502473Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:45.105883 containerd[2012]: time="2025-12-12T17:27:45.105050057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:45.106842 containerd[2012]: time="2025-12-12T17:27:45.106784045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.956343475s" Dec 12 17:27:45.107061 containerd[2012]: time="2025-12-12T17:27:45.107029625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:27:45.117370 containerd[2012]: time="2025-12-12T17:27:45.117298793Z" level=info msg="CreateContainer within sandbox \"9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:27:45.137897 
containerd[2012]: time="2025-12-12T17:27:45.137339105Z" level=info msg="Container f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:45.146260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040891557.mount: Deactivated successfully. Dec 12 17:27:45.168984 containerd[2012]: time="2025-12-12T17:27:45.168084749Z" level=info msg="CreateContainer within sandbox \"9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836\"" Dec 12 17:27:45.170463 containerd[2012]: time="2025-12-12T17:27:45.170411165Z" level=info msg="StartContainer for \"f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836\"" Dec 12 17:27:45.175415 containerd[2012]: time="2025-12-12T17:27:45.175241429Z" level=info msg="connecting to shim f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836" address="unix:///run/containerd/s/2569307da30bec7ecbd04fac32bd3c6f69db0aa0cf328678d7694fcf409a49b0" protocol=ttrpc version=3 Dec 12 17:27:45.224275 systemd[1]: Started cri-containerd-f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836.scope - libcontainer container f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836. Dec 12 17:27:45.307963 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 12 17:27:45.308151 kernel: audit: type=1334 audit(1765560465.303:582): prog-id=176 op=LOAD Dec 12 17:27:45.303000 audit: BPF prog-id=176 op=LOAD Dec 12 17:27:45.303000 audit[4204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4018 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:45.314838 kernel: audit: type=1300 audit(1765560465.303:582): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4018 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:45.320990 kernel: audit: type=1327 audit(1765560465.303:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353865623236643762366230653535366639303434636166656636 Dec 12 17:27:45.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353865623236643762366230653535366639303434636166656636 Dec 12 17:27:45.306000 audit: BPF prog-id=177 op=LOAD Dec 12 17:27:45.323050 kernel: audit: type=1334 audit(1765560465.306:583): prog-id=177 op=LOAD Dec 12 17:27:45.306000 audit[4204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4018 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:45.330441 kernel: audit: type=1300 audit(1765560465.306:583): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 
ppid=4018 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:45.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353865623236643762366230653535366639303434636166656636 Dec 12 17:27:45.336797 kernel: audit: type=1327 audit(1765560465.306:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353865623236643762366230653535366639303434636166656636 Dec 12 17:27:45.341517 kernel: audit: type=1334 audit(1765560465.306:584): prog-id=177 op=UNLOAD Dec 12 17:27:45.306000 audit: BPF prog-id=177 op=UNLOAD Dec 12 17:27:45.306000 audit[4204]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:45.348777 kernel: audit: type=1300 audit(1765560465.306:584): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:45.356114 kernel: audit: type=1327 audit(1765560465.306:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353865623236643762366230653535366639303434636166656636 Dec 12 17:27:45.356267 kernel: audit: type=1334 audit(1765560465.306:585): prog-id=176 op=UNLOAD Dec 12 17:27:45.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353865623236643762366230653535366639303434636166656636 Dec 12 17:27:45.306000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:27:45.306000 audit[4204]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:45.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353865623236643762366230653535366639303434636166656636 Dec 12 17:27:45.306000 audit: BPF prog-id=178 op=LOAD Dec 12 17:27:45.306000 audit[4204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4018 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:45.306000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353865623236643762366230653535366639303434636166656636 Dec 12 17:27:45.397282 containerd[2012]: time="2025-12-12T17:27:45.397234086Z" level=info msg="StartContainer for \"f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836\" returns successfully" Dec 12 17:27:45.425907 kubelet[3325]: E1212 17:27:45.425097 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:27:46.485840 containerd[2012]: time="2025-12-12T17:27:46.485531084Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:27:46.492722 systemd[1]: cri-containerd-f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836.scope: Deactivated successfully. Dec 12 17:27:46.493705 systemd[1]: cri-containerd-f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836.scope: Consumed 932ms CPU time, 186.6M memory peak, 165.9M written to disk. Dec 12 17:27:46.497000 audit: BPF prog-id=178 op=UNLOAD Dec 12 17:27:46.502485 containerd[2012]: time="2025-12-12T17:27:46.502417220Z" level=info msg="received container exit event container_id:\"f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836\" id:\"f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836\" pid:4217 exited_at:{seconds:1765560466 nanos:502085888}" Dec 12 17:27:46.543973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f758eb26d7b6b0e556f9044cafef6258ec0f7376998931c935ad2fbaef36d836-rootfs.mount: Deactivated successfully. Dec 12 17:27:46.549064 kubelet[3325]: I1212 17:27:46.549014 3325 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 12 17:27:46.706126 systemd[1]: Created slice kubepods-burstable-pod3db1076d_3ef5_41a5_8a08_117779fb2cae.slice - libcontainer container kubepods-burstable-pod3db1076d_3ef5_41a5_8a08_117779fb2cae.slice. Dec 12 17:27:46.751939 systemd[1]: Created slice kubepods-besteffort-pod334ae9c1_5a14_4018_8c8f_986d294ed109.slice - libcontainer container kubepods-besteffort-pod334ae9c1_5a14_4018_8c8f_986d294ed109.slice. Dec 12 17:27:46.794598 systemd[1]: Created slice kubepods-burstable-podda236b25_6353_4685_ace8_c9064e3e4481.slice - libcontainer container kubepods-burstable-podda236b25_6353_4685_ace8_c9064e3e4481.slice. 
Dec 12 17:27:46.868286 kubelet[3325]: I1212 17:27:46.868223 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da236b25-6353-4685-ace8-c9064e3e4481-config-volume\") pod \"coredns-66bc5c9577-cqttw\" (UID: \"da236b25-6353-4685-ace8-c9064e3e4481\") " pod="kube-system/coredns-66bc5c9577-cqttw" Dec 12 17:27:46.868517 kubelet[3325]: I1212 17:27:46.868300 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/334ae9c1-5a14-4018-8c8f-986d294ed109-calico-apiserver-certs\") pod \"calico-apiserver-6b6bc6ffc4-bklww\" (UID: \"334ae9c1-5a14-4018-8c8f-986d294ed109\") " pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" Dec 12 17:27:46.869375 kubelet[3325]: I1212 17:27:46.869301 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g26j\" (UniqueName: \"kubernetes.io/projected/334ae9c1-5a14-4018-8c8f-986d294ed109-kube-api-access-8g26j\") pod \"calico-apiserver-6b6bc6ffc4-bklww\" (UID: \"334ae9c1-5a14-4018-8c8f-986d294ed109\") " pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" Dec 12 17:27:46.869520 kubelet[3325]: I1212 17:27:46.869397 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3db1076d-3ef5-41a5-8a08-117779fb2cae-config-volume\") pod \"coredns-66bc5c9577-5lq7k\" (UID: \"3db1076d-3ef5-41a5-8a08-117779fb2cae\") " pod="kube-system/coredns-66bc5c9577-5lq7k" Dec 12 17:27:46.869520 kubelet[3325]: I1212 17:27:46.869468 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzpj\" (UniqueName: \"kubernetes.io/projected/da236b25-6353-4685-ace8-c9064e3e4481-kube-api-access-gbzpj\") pod \"coredns-66bc5c9577-cqttw\" (UID: \"da236b25-6353-4685-ace8-c9064e3e4481\") " pod="kube-system/coredns-66bc5c9577-cqttw" Dec 12 17:27:46.869649 kubelet[3325]: I1212 17:27:46.869539 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5w8\" (UniqueName: \"kubernetes.io/projected/3db1076d-3ef5-41a5-8a08-117779fb2cae-kube-api-access-wn5w8\") pod \"coredns-66bc5c9577-5lq7k\" (UID: \"3db1076d-3ef5-41a5-8a08-117779fb2cae\") " pod="kube-system/coredns-66bc5c9577-5lq7k" Dec 12 17:27:46.889386 systemd[1]: Created slice kubepods-besteffort-podbaecc54d_00d4_4fc9_9061_9d5f893dca48.slice - libcontainer container kubepods-besteffort-podbaecc54d_00d4_4fc9_9061_9d5f893dca48.slice. Dec 12 17:27:46.958149 systemd[1]: Created slice kubepods-besteffort-pod87213e25_1478_4b01_ac6c_54452f7f57dd.slice - libcontainer container kubepods-besteffort-pod87213e25_1478_4b01_ac6c_54452f7f57dd.slice. 
Dec 12 17:27:46.970654 kubelet[3325]: I1212 17:27:46.970509 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkcqg\" (UniqueName: \"kubernetes.io/projected/baecc54d-00d4-4fc9-9061-9d5f893dca48-kube-api-access-qkcqg\") pod \"calico-kube-controllers-6b7498b7d9-pzhlv\" (UID: \"baecc54d-00d4-4fc9-9061-9d5f893dca48\") " pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" Dec 12 17:27:47.000373 kubelet[3325]: I1212 17:27:46.970804 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baecc54d-00d4-4fc9-9061-9d5f893dca48-tigera-ca-bundle\") pod \"calico-kube-controllers-6b7498b7d9-pzhlv\" (UID: \"baecc54d-00d4-4fc9-9061-9d5f893dca48\") " pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" Dec 12 17:27:47.072673 kubelet[3325]: I1212 17:27:47.072225 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87213e25-1478-4b01-ac6c-54452f7f57dd-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-c927k\" (UID: \"87213e25-1478-4b01-ac6c-54452f7f57dd\") " pod="calico-system/goldmane-7c778bb748-c927k" Dec 12 17:27:47.086781 kubelet[3325]: I1212 17:27:47.076888 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87213e25-1478-4b01-ac6c-54452f7f57dd-config\") pod \"goldmane-7c778bb748-c927k\" (UID: \"87213e25-1478-4b01-ac6c-54452f7f57dd\") " pod="calico-system/goldmane-7c778bb748-c927k" Dec 12 17:27:47.086781 kubelet[3325]: I1212 17:27:47.076951 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/87213e25-1478-4b01-ac6c-54452f7f57dd-goldmane-key-pair\") pod \"goldmane-7c778bb748-c927k\" (UID: \"87213e25-1478-4b01-ac6c-54452f7f57dd\") " pod="calico-system/goldmane-7c778bb748-c927k" Dec 12 17:27:47.086781 kubelet[3325]: I1212 17:27:47.076989 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfz9g\" (UniqueName: \"kubernetes.io/projected/87213e25-1478-4b01-ac6c-54452f7f57dd-kube-api-access-vfz9g\") pod \"goldmane-7c778bb748-c927k\" (UID: \"87213e25-1478-4b01-ac6c-54452f7f57dd\") " pod="calico-system/goldmane-7c778bb748-c927k" Dec 12 17:27:47.079842 systemd[1]: Created slice kubepods-besteffort-pod11ece337_6017_4f67_9160_e5ad30878870.slice - libcontainer container kubepods-besteffort-pod11ece337_6017_4f67_9160_e5ad30878870.slice. Dec 12 17:27:47.112879 containerd[2012]: time="2025-12-12T17:27:47.112790683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5lq7k,Uid:3db1076d-3ef5-41a5-8a08-117779fb2cae,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:47.131529 systemd[1]: Created slice kubepods-besteffort-poddf70d9f4_d51b_472c_b1f4_6f65f02c50aa.slice - libcontainer container kubepods-besteffort-poddf70d9f4_d51b_472c_b1f4_6f65f02c50aa.slice. 
Dec 12 17:27:47.177871 kubelet[3325]: I1212 17:27:47.177652 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11ece337-6017-4f67-9160-e5ad30878870-whisker-ca-bundle\") pod \"whisker-dfc6d67bd-7ds4m\" (UID: \"11ece337-6017-4f67-9160-e5ad30878870\") " pod="calico-system/whisker-dfc6d67bd-7ds4m" Dec 12 17:27:47.177871 kubelet[3325]: I1212 17:27:47.177791 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbcnv\" (UniqueName: \"kubernetes.io/projected/11ece337-6017-4f67-9160-e5ad30878870-kube-api-access-hbcnv\") pod \"whisker-dfc6d67bd-7ds4m\" (UID: \"11ece337-6017-4f67-9160-e5ad30878870\") " pod="calico-system/whisker-dfc6d67bd-7ds4m" Dec 12 17:27:47.179305 kubelet[3325]: I1212 17:27:47.177835 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11ece337-6017-4f67-9160-e5ad30878870-whisker-backend-key-pair\") pod \"whisker-dfc6d67bd-7ds4m\" (UID: \"11ece337-6017-4f67-9160-e5ad30878870\") " pod="calico-system/whisker-dfc6d67bd-7ds4m" Dec 12 17:27:47.212329 containerd[2012]: time="2025-12-12T17:27:47.212279551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bc6ffc4-bklww,Uid:334ae9c1-5a14-4018-8c8f-986d294ed109,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:27:47.255948 systemd[1]: Created slice kubepods-besteffort-pod146b6197_b092_48f2_948f_08d710a51bd7.slice - libcontainer container kubepods-besteffort-pod146b6197_b092_48f2_948f_08d710a51bd7.slice. Dec 12 17:27:47.271613 containerd[2012]: time="2025-12-12T17:27:47.271556108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b7498b7d9-pzhlv,Uid:baecc54d-00d4-4fc9-9061-9d5f893dca48,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:47.279544 kubelet[3325]: I1212 17:27:47.279480 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5fx\" (UniqueName: \"kubernetes.io/projected/df70d9f4-d51b-472c-b1f4-6f65f02c50aa-kube-api-access-kr5fx\") pod \"calico-apiserver-6b6bc6ffc4-ln8dl\" (UID: \"df70d9f4-d51b-472c-b1f4-6f65f02c50aa\") " pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" Dec 12 17:27:47.279676 kubelet[3325]: I1212 17:27:47.279552 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/df70d9f4-d51b-472c-b1f4-6f65f02c50aa-calico-apiserver-certs\") pod \"calico-apiserver-6b6bc6ffc4-ln8dl\" (UID: \"df70d9f4-d51b-472c-b1f4-6f65f02c50aa\") " pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" Dec 12 17:27:47.327965 containerd[2012]: time="2025-12-12T17:27:47.327534788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cqttw,Uid:da236b25-6353-4685-ace8-c9064e3e4481,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:47.380757 kubelet[3325]: I1212 17:27:47.380697 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/146b6197-b092-48f2-948f-08d710a51bd7-calico-apiserver-certs\") pod \"calico-apiserver-5fcd4b4759-jhpmk\" (UID: \"146b6197-b092-48f2-948f-08d710a51bd7\") " pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" Dec 12 17:27:47.380979 kubelet[3325]: I1212 
17:27:47.380797 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6k2f\" (UniqueName: \"kubernetes.io/projected/146b6197-b092-48f2-948f-08d710a51bd7-kube-api-access-j6k2f\") pod \"calico-apiserver-5fcd4b4759-jhpmk\" (UID: \"146b6197-b092-48f2-948f-08d710a51bd7\") " pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" Dec 12 17:27:47.399916 containerd[2012]: time="2025-12-12T17:27:47.399438944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c927k,Uid:87213e25-1478-4b01-ac6c-54452f7f57dd,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:47.437606 systemd[1]: Created slice kubepods-besteffort-pod8797b6d6_7a5e_4865_91c2_2bd3d90f57cf.slice - libcontainer container kubepods-besteffort-pod8797b6d6_7a5e_4865_91c2_2bd3d90f57cf.slice. Dec 12 17:27:47.453879 containerd[2012]: time="2025-12-12T17:27:47.453758444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dfc6d67bd-7ds4m,Uid:11ece337-6017-4f67-9160-e5ad30878870,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:47.513513 containerd[2012]: time="2025-12-12T17:27:47.513414441Z" level=error msg="Failed to destroy network for sandbox \"2fd4fc55cd8ad51dbcd921ea7e3a9e813399121179c33e5ccc7f89ed4ea76804\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:47.518777 containerd[2012]: time="2025-12-12T17:27:47.518698713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bc6ffc4-ln8dl,Uid:df70d9f4-d51b-472c-b1f4-6f65f02c50aa,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:27:47.564804 containerd[2012]: time="2025-12-12T17:27:47.564739545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hb6pw,Uid:8797b6d6-7a5e-4865-91c2-2bd3d90f57cf,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:47.579721 containerd[2012]: time="2025-12-12T17:27:47.579559221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcd4b4759-jhpmk,Uid:146b6197-b092-48f2-948f-08d710a51bd7,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:27:47.647252 containerd[2012]: time="2025-12-12T17:27:47.647179005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5lq7k,Uid:3db1076d-3ef5-41a5-8a08-117779fb2cae,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd4fc55cd8ad51dbcd921ea7e3a9e813399121179c33e5ccc7f89ed4ea76804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:47.649483 kubelet[3325]: E1212 17:27:47.649421 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd4fc55cd8ad51dbcd921ea7e3a9e813399121179c33e5ccc7f89ed4ea76804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:47.651261 kubelet[3325]: E1212 17:27:47.650134 3325 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd4fc55cd8ad51dbcd921ea7e3a9e813399121179c33e5ccc7f89ed4ea76804\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5lq7k" Dec 12 17:27:47.651261 kubelet[3325]: E1212 17:27:47.650182 3325 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd4fc55cd8ad51dbcd921ea7e3a9e813399121179c33e5ccc7f89ed4ea76804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5lq7k" Dec 12 17:27:47.651261 kubelet[3325]: E1212 17:27:47.650275 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5lq7k_kube-system(3db1076d-3ef5-41a5-8a08-117779fb2cae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5lq7k_kube-system(3db1076d-3ef5-41a5-8a08-117779fb2cae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fd4fc55cd8ad51dbcd921ea7e3a9e813399121179c33e5ccc7f89ed4ea76804\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5lq7k" podUID="3db1076d-3ef5-41a5-8a08-117779fb2cae" Dec 12 17:27:47.792715 containerd[2012]: time="2025-12-12T17:27:47.790425250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:27:48.032749 containerd[2012]: time="2025-12-12T17:27:48.032192455Z" level=error msg="Failed to destroy network for sandbox \"9b60690370c2522e1ca25a44ccfe9f7345f3e810b27536a656615a2fa63b4c43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.042573 containerd[2012]: time="2025-12-12T17:27:48.042473191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b7498b7d9-pzhlv,Uid:baecc54d-00d4-4fc9-9061-9d5f893dca48,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b60690370c2522e1ca25a44ccfe9f7345f3e810b27536a656615a2fa63b4c43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.044572 kubelet[3325]: E1212 17:27:48.044519 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b60690370c2522e1ca25a44ccfe9f7345f3e810b27536a656615a2fa63b4c43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.044970 kubelet[3325]: E1212 17:27:48.044931 3325 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b60690370c2522e1ca25a44ccfe9f7345f3e810b27536a656615a2fa63b4c43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" Dec 
12 17:27:48.045232 kubelet[3325]: E1212 17:27:48.045180 3325 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b60690370c2522e1ca25a44ccfe9f7345f3e810b27536a656615a2fa63b4c43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" Dec 12 17:27:48.045578 kubelet[3325]: E1212 17:27:48.045530 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b7498b7d9-pzhlv_calico-system(baecc54d-00d4-4fc9-9061-9d5f893dca48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b7498b7d9-pzhlv_calico-system(baecc54d-00d4-4fc9-9061-9d5f893dca48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b60690370c2522e1ca25a44ccfe9f7345f3e810b27536a656615a2fa63b4c43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:27:48.083543 containerd[2012]: time="2025-12-12T17:27:48.083439500Z" level=error msg="Failed to destroy network for sandbox \"d6809bc05dd86e9d2bad385ec67533390cc5f407222fd4b3ad0ad99c49e3ba49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.093754 containerd[2012]: time="2025-12-12T17:27:48.093454664Z" level=error msg="Failed to destroy network for sandbox \"78a856a0a168f61b93c9ce4d5aa52b6e047711ae3e69ec299e6809437c90bc2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.094128 containerd[2012]: time="2025-12-12T17:27:48.093976076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bc6ffc4-bklww,Uid:334ae9c1-5a14-4018-8c8f-986d294ed109,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6809bc05dd86e9d2bad385ec67533390cc5f407222fd4b3ad0ad99c49e3ba49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.095836 kubelet[3325]: E1212 17:27:48.094381 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6809bc05dd86e9d2bad385ec67533390cc5f407222fd4b3ad0ad99c49e3ba49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.095836 kubelet[3325]: E1212 17:27:48.094461 3325 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6809bc05dd86e9d2bad385ec67533390cc5f407222fd4b3ad0ad99c49e3ba49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" Dec 12 17:27:48.095836 kubelet[3325]: E1212 17:27:48.094964 3325 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6809bc05dd86e9d2bad385ec67533390cc5f407222fd4b3ad0ad99c49e3ba49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" Dec 12 17:27:48.096319 kubelet[3325]: E1212 17:27:48.095623 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bc6ffc4-bklww_calico-apiserver(334ae9c1-5a14-4018-8c8f-986d294ed109)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6bc6ffc4-bklww_calico-apiserver(334ae9c1-5a14-4018-8c8f-986d294ed109)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6809bc05dd86e9d2bad385ec67533390cc5f407222fd4b3ad0ad99c49e3ba49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:27:48.108932 containerd[2012]: time="2025-12-12T17:27:48.108303140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cqttw,Uid:da236b25-6353-4685-ace8-c9064e3e4481,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a856a0a168f61b93c9ce4d5aa52b6e047711ae3e69ec299e6809437c90bc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.109147 kubelet[3325]: E1212 17:27:48.108731 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a856a0a168f61b93c9ce4d5aa52b6e047711ae3e69ec299e6809437c90bc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.109147 kubelet[3325]: E1212 17:27:48.108806 3325 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a856a0a168f61b93c9ce4d5aa52b6e047711ae3e69ec299e6809437c90bc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cqttw" Dec 12 17:27:48.109147 kubelet[3325]: E1212 17:27:48.108863 3325 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a856a0a168f61b93c9ce4d5aa52b6e047711ae3e69ec299e6809437c90bc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cqttw" Dec 12 17:27:48.110723 kubelet[3325]: E1212 17:27:48.110607 3325 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-cqttw_kube-system(da236b25-6353-4685-ace8-c9064e3e4481)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-cqttw_kube-system(da236b25-6353-4685-ace8-c9064e3e4481)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78a856a0a168f61b93c9ce4d5aa52b6e047711ae3e69ec299e6809437c90bc2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cqttw" podUID="da236b25-6353-4685-ace8-c9064e3e4481" Dec 12 17:27:48.155357 containerd[2012]: time="2025-12-12T17:27:48.155278688Z" level=error msg="Failed to destroy network for sandbox \"d50819cc7dccb056bbb11e8ea583f4c33e5e0b3041c5c6efd1cce86013d7503b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.163504 containerd[2012]: time="2025-12-12T17:27:48.163411232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dfc6d67bd-7ds4m,Uid:11ece337-6017-4f67-9160-e5ad30878870,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50819cc7dccb056bbb11e8ea583f4c33e5e0b3041c5c6efd1cce86013d7503b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.164724 containerd[2012]: time="2025-12-12T17:27:48.164361836Z" level=error msg="Failed to destroy network for sandbox \"8a59156f352eff505dc8740070b4365f752de04834a493b63009ffac0750dfe6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.165123 kubelet[3325]: E1212 17:27:48.165059 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50819cc7dccb056bbb11e8ea583f4c33e5e0b3041c5c6efd1cce86013d7503b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.165278 kubelet[3325]: E1212 17:27:48.165144 3325 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50819cc7dccb056bbb11e8ea583f4c33e5e0b3041c5c6efd1cce86013d7503b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dfc6d67bd-7ds4m" Dec 12 17:27:48.165278 kubelet[3325]: E1212 17:27:48.165186 3325 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50819cc7dccb056bbb11e8ea583f4c33e5e0b3041c5c6efd1cce86013d7503b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dfc6d67bd-7ds4m" Dec 12 17:27:48.165437 
kubelet[3325]: E1212 17:27:48.165278 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dfc6d67bd-7ds4m_calico-system(11ece337-6017-4f67-9160-e5ad30878870)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dfc6d67bd-7ds4m_calico-system(11ece337-6017-4f67-9160-e5ad30878870)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d50819cc7dccb056bbb11e8ea583f4c33e5e0b3041c5c6efd1cce86013d7503b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dfc6d67bd-7ds4m" podUID="11ece337-6017-4f67-9160-e5ad30878870" Dec 12 17:27:48.175348 containerd[2012]: time="2025-12-12T17:27:48.175286840Z" level=error msg="Failed to destroy network for sandbox \"4fde085f09a950e536e74d2870438eea88f26e930cc1cf7db8ff3e6365215fb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.179960 containerd[2012]: time="2025-12-12T17:27:48.179703932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bc6ffc4-ln8dl,Uid:df70d9f4-d51b-472c-b1f4-6f65f02c50aa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a59156f352eff505dc8740070b4365f752de04834a493b63009ffac0750dfe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.180985 kubelet[3325]: E1212 17:27:48.180385 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a59156f352eff505dc8740070b4365f752de04834a493b63009ffac0750dfe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.180985 kubelet[3325]: E1212 17:27:48.180510 3325 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a59156f352eff505dc8740070b4365f752de04834a493b63009ffac0750dfe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" Dec 12 17:27:48.180985 kubelet[3325]: E1212 17:27:48.180588 3325 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a59156f352eff505dc8740070b4365f752de04834a493b63009ffac0750dfe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" Dec 12 17:27:48.181243 kubelet[3325]: E1212 17:27:48.180679 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bc6ffc4-ln8dl_calico-apiserver(df70d9f4-d51b-472c-b1f4-6f65f02c50aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6b6bc6ffc4-ln8dl_calico-apiserver(df70d9f4-d51b-472c-b1f4-6f65f02c50aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a59156f352eff505dc8740070b4365f752de04834a493b63009ffac0750dfe6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:27:48.185427 containerd[2012]: time="2025-12-12T17:27:48.185256020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcd4b4759-jhpmk,Uid:146b6197-b092-48f2-948f-08d710a51bd7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fde085f09a950e536e74d2870438eea88f26e930cc1cf7db8ff3e6365215fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.187354 kubelet[3325]: E1212 17:27:48.186702 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fde085f09a950e536e74d2870438eea88f26e930cc1cf7db8ff3e6365215fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.187509 kubelet[3325]: E1212 17:27:48.187406 3325 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fde085f09a950e536e74d2870438eea88f26e930cc1cf7db8ff3e6365215fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" Dec 12 17:27:48.187509 kubelet[3325]: E1212 17:27:48.187478 3325 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fde085f09a950e536e74d2870438eea88f26e930cc1cf7db8ff3e6365215fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" Dec 12 17:27:48.187827 kubelet[3325]: E1212 17:27:48.187628 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fcd4b4759-jhpmk_calico-apiserver(146b6197-b092-48f2-948f-08d710a51bd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fcd4b4759-jhpmk_calico-apiserver(146b6197-b092-48f2-948f-08d710a51bd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fde085f09a950e536e74d2870438eea88f26e930cc1cf7db8ff3e6365215fb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:27:48.191042 containerd[2012]: time="2025-12-12T17:27:48.190818956Z" level=error msg="Failed to destroy network for sandbox 
\"241e935fc8f6d3d5fa2b577fc670762b59b187169bd944ecf8ee25a1a6aa2757\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.196793 containerd[2012]: time="2025-12-12T17:27:48.196726112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c927k,Uid:87213e25-1478-4b01-ac6c-54452f7f57dd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"241e935fc8f6d3d5fa2b577fc670762b59b187169bd944ecf8ee25a1a6aa2757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.197650 kubelet[3325]: E1212 17:27:48.197415 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"241e935fc8f6d3d5fa2b577fc670762b59b187169bd944ecf8ee25a1a6aa2757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.197650 kubelet[3325]: E1212 17:27:48.197531 3325 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"241e935fc8f6d3d5fa2b577fc670762b59b187169bd944ecf8ee25a1a6aa2757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-c927k" Dec 12 17:27:48.197650 kubelet[3325]: E1212 17:27:48.197568 3325 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"241e935fc8f6d3d5fa2b577fc670762b59b187169bd944ecf8ee25a1a6aa2757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-c927k" Dec 12 17:27:48.198192 kubelet[3325]: E1212 17:27:48.198025 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-c927k_calico-system(87213e25-1478-4b01-ac6c-54452f7f57dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-c927k_calico-system(87213e25-1478-4b01-ac6c-54452f7f57dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"241e935fc8f6d3d5fa2b577fc670762b59b187169bd944ecf8ee25a1a6aa2757\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:27:48.202130 containerd[2012]: time="2025-12-12T17:27:48.202018496Z" level=error msg="Failed to destroy network for sandbox \"e24f4da0300ba7eb6b65d829e862d2839567fef369580353e9b3b296385a243b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.207651 containerd[2012]: time="2025-12-12T17:27:48.207564680Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hb6pw,Uid:8797b6d6-7a5e-4865-91c2-2bd3d90f57cf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24f4da0300ba7eb6b65d829e862d2839567fef369580353e9b3b296385a243b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.208300 kubelet[3325]: E1212 17:27:48.208233 3325 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24f4da0300ba7eb6b65d829e862d2839567fef369580353e9b3b296385a243b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:48.208430 kubelet[3325]: E1212 17:27:48.208364 3325 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24f4da0300ba7eb6b65d829e862d2839567fef369580353e9b3b296385a243b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hb6pw" Dec 12 17:27:48.208510 kubelet[3325]: E1212 17:27:48.208406 3325 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24f4da0300ba7eb6b65d829e862d2839567fef369580353e9b3b296385a243b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hb6pw" Dec 12 17:27:48.208576 kubelet[3325]: E1212 17:27:48.208544 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e24f4da0300ba7eb6b65d829e862d2839567fef369580353e9b3b296385a243b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:27:48.542450 systemd[1]: run-netns-cni\x2d09b743be\x2d542b\x2d0603\x2d85cf\x2d65c8639e4128.mount: Deactivated successfully. Dec 12 17:27:48.542725 systemd[1]: run-netns-cni\x2d3e18d9a6\x2d3714\x2d9e85\x2d1354\x2d517d5a04bc2e.mount: Deactivated successfully. Dec 12 17:27:48.543448 systemd[1]: run-netns-cni\x2db4904d11\x2d20d6\x2db535\x2d2d86\x2d8a05a64ffb0a.mount: Deactivated successfully. Dec 12 17:27:48.543603 systemd[1]: run-netns-cni\x2de75f0fb4\x2dc805\x2d0244\x2d2a9f\x2d337a20b3f980.mount: Deactivated successfully. Dec 12 17:27:48.543723 systemd[1]: run-netns-cni\x2d1fd30944\x2d199e\x2da252\x2da686\x2d895487fd4e20.mount: Deactivated successfully. Dec 12 17:27:54.068548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2211960165.mount: Deactivated successfully. 
Dec 12 17:27:54.136024 containerd[2012]: time="2025-12-12T17:27:54.135814430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:54.140765 containerd[2012]: time="2025-12-12T17:27:54.140626454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:27:54.192886 containerd[2012]: time="2025-12-12T17:27:54.192395570Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:54.199342 containerd[2012]: time="2025-12-12T17:27:54.199286342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:54.200408 containerd[2012]: time="2025-12-12T17:27:54.200348882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.409418912s" Dec 12 17:27:54.200408 containerd[2012]: time="2025-12-12T17:27:54.200404130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:27:54.241105 containerd[2012]: time="2025-12-12T17:27:54.241047482Z" level=info msg="CreateContainer within sandbox \"9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:27:54.264084 containerd[2012]: time="2025-12-12T17:27:54.261091802Z" level=info msg="Container 2ac273ee25b1f126a492652121b829d31d4c92f301926bf047135074d4a70c93: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:54.285782 containerd[2012]: time="2025-12-12T17:27:54.285730154Z" level=info msg="CreateContainer within sandbox \"9e4da854595c868a55b9d490077f836f0b881df2208488604de6971b17a5d31d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2ac273ee25b1f126a492652121b829d31d4c92f301926bf047135074d4a70c93\"" Dec 12 17:27:54.286781 containerd[2012]: time="2025-12-12T17:27:54.286730330Z" level=info msg="StartContainer for \"2ac273ee25b1f126a492652121b829d31d4c92f301926bf047135074d4a70c93\"" Dec 12 17:27:54.290444 containerd[2012]: time="2025-12-12T17:27:54.290358566Z" level=info msg="connecting to shim 2ac273ee25b1f126a492652121b829d31d4c92f301926bf047135074d4a70c93" address="unix:///run/containerd/s/2569307da30bec7ecbd04fac32bd3c6f69db0aa0cf328678d7694fcf409a49b0" protocol=ttrpc version=3 Dec 12 17:27:54.328401 systemd[1]: Started cri-containerd-2ac273ee25b1f126a492652121b829d31d4c92f301926bf047135074d4a70c93.scope - libcontainer container 2ac273ee25b1f126a492652121b829d31d4c92f301926bf047135074d4a70c93. 
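The pull record above reports both the image size and the elapsed time (size "150934424" in 6.409418912s, with 150930912 bytes read), so the effective transfer rate can be sanity-checked directly from the log values; the snippet below is only that arithmetic.

# Rough throughput check for the calico/node pull reported above.
size_bytes = 150_934_424          # 'size "150934424"' from the Pulled line
bytes_read = 150_930_912          # "bytes read=150930912" from the stop-pulling line
duration_s = 6.409418912          # "in 6.409418912s"

print(f"reported size : {size_bytes / 1e6:7.1f} MB")
print(f"bytes read    : {bytes_read / 1e6:7.1f} MB")
print(f"effective rate: {size_bytes / duration_s / 1e6:7.1f} MB/s")
# -> roughly 23.5 MB/s for a ~151 MB image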
Dec 12 17:27:54.422000 audit: BPF prog-id=179 op=LOAD Dec 12 17:27:54.424744 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 12 17:27:54.424828 kernel: audit: type=1334 audit(1765560474.422:588): prog-id=179 op=LOAD Dec 12 17:27:54.422000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4018 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:54.432837 kernel: audit: type=1300 audit(1765560474.422:588): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4018 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:54.433508 kernel: audit: type=1327 audit(1765560474.422:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633237336565323562316631323661343932363532313231623832 Dec 12 17:27:54.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633237336565323562316631323661343932363532313231623832 Dec 12 17:27:54.426000 audit: BPF prog-id=180 op=LOAD Dec 12 17:27:54.441051 kernel: audit: type=1334 audit(1765560474.426:589): prog-id=180 op=LOAD Dec 12 17:27:54.441159 kernel: audit: type=1300 audit(1765560474.426:589): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4018 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:54.426000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4018 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:54.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633237336565323562316631323661343932363532313231623832 Dec 12 17:27:54.452796 kernel: audit: type=1327 audit(1765560474.426:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633237336565323562316631323661343932363532313231623832 Dec 12 17:27:54.426000 audit: BPF prog-id=180 op=UNLOAD Dec 12 17:27:54.454887 kernel: audit: type=1334 audit(1765560474.426:590): prog-id=180 op=UNLOAD Dec 12 17:27:54.426000 audit[4509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:54.460880 kernel: audit: type=1300 
audit(1765560474.426:590): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:54.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633237336565323562316631323661343932363532313231623832 Dec 12 17:27:54.468331 kernel: audit: type=1327 audit(1765560474.426:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633237336565323562316631323661343932363532313231623832 Dec 12 17:27:54.468469 kernel: audit: type=1334 audit(1765560474.426:591): prog-id=179 op=UNLOAD Dec 12 17:27:54.426000 audit: BPF prog-id=179 op=UNLOAD Dec 12 17:27:54.426000 audit[4509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:54.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633237336565323562316631323661343932363532313231623832 Dec 12 17:27:54.426000 audit: BPF prog-id=181 op=LOAD Dec 12 17:27:54.426000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4018 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:54.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261633237336565323562316631323661343932363532313231623832 Dec 12 17:27:54.509816 containerd[2012]: time="2025-12-12T17:27:54.509676387Z" level=info msg="StartContainer for \"2ac273ee25b1f126a492652121b829d31d4c92f301926bf047135074d4a70c93\" returns successfully" Dec 12 17:27:54.858881 kubelet[3325]: I1212 17:27:54.857521 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mn7dh" podStartSLOduration=2.235958507 podStartE2EDuration="17.857496005s" podCreationTimestamp="2025-12-12 17:27:37 +0000 UTC" firstStartedPulling="2025-12-12 17:27:38.5810517 +0000 UTC m=+33.603504336" lastFinishedPulling="2025-12-12 17:27:54.202589198 +0000 UTC m=+49.225041834" observedRunningTime="2025-12-12 17:27:54.856369817 +0000 UTC m=+49.878822477" watchObservedRunningTime="2025-12-12 17:27:54.857496005 +0000 UTC m=+49.879948629" Dec 12 17:27:54.953143 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:27:54.953277 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
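The pod_startup_latency_tracker entry above prints podStartSLOduration=2.235958507 next to podStartE2EDuration="17.857496005s" and the two monotonic pull offsets (m=+33.603504336 and m=+49.225041834). Those values are consistent with the SLO figure being the end-to-end duration minus the image-pull window; treating it that way is an interpretation, but the arithmetic below, using only numbers copied from the entry, reproduces the reported value exactly.

# Consistency check on the pod startup latency entry above, using only the
# numbers printed in the log (the m=+... values are monotonic offsets since
# kubelet start).
e2e_duration       = 17.857496005   # podStartE2EDuration
first_started_pull = 33.603504336   # firstStartedPulling  (m=+...)
last_finished_pull = 49.225041834   # lastFinishedPulling  (m=+...)
slo_reported       = 2.235958507    # podStartSLOduration

pull_window = last_finished_pull - first_started_pull       # ~15.62 s spent pulling
slo_derived = e2e_duration - pull_window

print(f"pull window : {pull_window:.9f} s")
print(f"derived SLO : {slo_derived:.9f} s")
print(f"reported SLO: {slo_reported:.9f} s")
assert abs(slo_derived - slo_reported) < 1e-6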
Dec 12 17:27:55.350309 kubelet[3325]: I1212 17:27:55.350250 3325 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11ece337-6017-4f67-9160-e5ad30878870-whisker-backend-key-pair\") pod \"11ece337-6017-4f67-9160-e5ad30878870\" (UID: \"11ece337-6017-4f67-9160-e5ad30878870\") " Dec 12 17:27:55.351795 kubelet[3325]: I1212 17:27:55.351531 3325 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11ece337-6017-4f67-9160-e5ad30878870-whisker-ca-bundle\") pod \"11ece337-6017-4f67-9160-e5ad30878870\" (UID: \"11ece337-6017-4f67-9160-e5ad30878870\") " Dec 12 17:27:55.351795 kubelet[3325]: I1212 17:27:55.351602 3325 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbcnv\" (UniqueName: \"kubernetes.io/projected/11ece337-6017-4f67-9160-e5ad30878870-kube-api-access-hbcnv\") pod \"11ece337-6017-4f67-9160-e5ad30878870\" (UID: \"11ece337-6017-4f67-9160-e5ad30878870\") " Dec 12 17:27:55.356588 kubelet[3325]: I1212 17:27:55.356492 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ece337-6017-4f67-9160-e5ad30878870-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "11ece337-6017-4f67-9160-e5ad30878870" (UID: "11ece337-6017-4f67-9160-e5ad30878870"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:27:55.362804 systemd[1]: var-lib-kubelet-pods-11ece337\x2d6017\x2d4f67\x2d9160\x2de5ad30878870-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:27:55.366005 kubelet[3325]: I1212 17:27:55.365423 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ece337-6017-4f67-9160-e5ad30878870-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "11ece337-6017-4f67-9160-e5ad30878870" (UID: "11ece337-6017-4f67-9160-e5ad30878870"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:27:55.373598 systemd[1]: var-lib-kubelet-pods-11ece337\x2d6017\x2d4f67\x2d9160\x2de5ad30878870-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhbcnv.mount: Deactivated successfully. Dec 12 17:27:55.374819 kubelet[3325]: I1212 17:27:55.374031 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ece337-6017-4f67-9160-e5ad30878870-kube-api-access-hbcnv" (OuterVolumeSpecName: "kube-api-access-hbcnv") pod "11ece337-6017-4f67-9160-e5ad30878870" (UID: "11ece337-6017-4f67-9160-e5ad30878870"). InnerVolumeSpecName "kube-api-access-hbcnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:27:55.453229 kubelet[3325]: I1212 17:27:55.452090 3325 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hbcnv\" (UniqueName: \"kubernetes.io/projected/11ece337-6017-4f67-9160-e5ad30878870-kube-api-access-hbcnv\") on node \"ip-172-31-16-55\" DevicePath \"\"" Dec 12 17:27:55.453229 kubelet[3325]: I1212 17:27:55.452150 3325 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11ece337-6017-4f67-9160-e5ad30878870-whisker-backend-key-pair\") on node \"ip-172-31-16-55\" DevicePath \"\"" Dec 12 17:27:55.453229 kubelet[3325]: I1212 17:27:55.452175 3325 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11ece337-6017-4f67-9160-e5ad30878870-whisker-ca-bundle\") on node \"ip-172-31-16-55\" DevicePath \"\"" Dec 12 17:27:55.462933 systemd[1]: Removed slice kubepods-besteffort-pod11ece337_6017_4f67_9160_e5ad30878870.slice - libcontainer container kubepods-besteffort-pod11ece337_6017_4f67_9160_e5ad30878870.slice. Dec 12 17:27:55.985302 systemd[1]: Created slice kubepods-besteffort-pod3ec49c85_5274_4ed7_b914_fc08a271b46e.slice - libcontainer container kubepods-besteffort-pod3ec49c85_5274_4ed7_b914_fc08a271b46e.slice. Dec 12 17:27:56.058384 kubelet[3325]: I1212 17:27:56.058324 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ec49c85-5274-4ed7-b914-fc08a271b46e-whisker-backend-key-pair\") pod \"whisker-cbdd468db-lnm8s\" (UID: \"3ec49c85-5274-4ed7-b914-fc08a271b46e\") " pod="calico-system/whisker-cbdd468db-lnm8s" Dec 12 17:27:56.059056 kubelet[3325]: I1212 17:27:56.058427 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ec49c85-5274-4ed7-b914-fc08a271b46e-whisker-ca-bundle\") pod \"whisker-cbdd468db-lnm8s\" (UID: \"3ec49c85-5274-4ed7-b914-fc08a271b46e\") " pod="calico-system/whisker-cbdd468db-lnm8s" Dec 12 17:27:56.059056 kubelet[3325]: I1212 17:27:56.058472 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp96p\" (UniqueName: \"kubernetes.io/projected/3ec49c85-5274-4ed7-b914-fc08a271b46e-kube-api-access-rp96p\") pod \"whisker-cbdd468db-lnm8s\" (UID: \"3ec49c85-5274-4ed7-b914-fc08a271b46e\") " pod="calico-system/whisker-cbdd468db-lnm8s" Dec 12 17:27:56.300182 containerd[2012]: time="2025-12-12T17:27:56.300059884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbdd468db-lnm8s,Uid:3ec49c85-5274-4ed7-b914-fc08a271b46e,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:56.597351 (udev-worker)[4546]: Network interface NamePolicy= disabled on kernel command line. 
Dec 12 17:27:56.600650 systemd-networkd[1585]: cali8baae981d08: Link UP Dec 12 17:27:56.603533 systemd-networkd[1585]: cali8baae981d08: Gained carrier Dec 12 17:27:56.644794 containerd[2012]: 2025-12-12 17:27:56.346 [INFO][4626] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:27:56.644794 containerd[2012]: 2025-12-12 17:27:56.431 [INFO][4626] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0 whisker-cbdd468db- calico-system 3ec49c85-5274-4ed7-b914-fc08a271b46e 956 0 2025-12-12 17:27:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cbdd468db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-55 whisker-cbdd468db-lnm8s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8baae981d08 [] [] }} ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Namespace="calico-system" Pod="whisker-cbdd468db-lnm8s" WorkloadEndpoint="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-" Dec 12 17:27:56.644794 containerd[2012]: 2025-12-12 17:27:56.431 [INFO][4626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Namespace="calico-system" Pod="whisker-cbdd468db-lnm8s" WorkloadEndpoint="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" Dec 12 17:27:56.644794 containerd[2012]: 2025-12-12 17:27:56.515 [INFO][4637] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" HandleID="k8s-pod-network.80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Workload="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.515 [INFO][4637] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" HandleID="k8s-pod-network.80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Workload="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000283f20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-55", "pod":"whisker-cbdd468db-lnm8s", "timestamp":"2025-12-12 17:27:56.515001605 +0000 UTC"}, Hostname:"ip-172-31-16-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.515 [INFO][4637] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.515 [INFO][4637] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.515 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-55' Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.531 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" host="ip-172-31-16-55" Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.542 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-55" Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.549 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.553 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.557 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:27:56.645188 containerd[2012]: 2025-12-12 17:27:56.558 [INFO][4637] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.64/26 handle="k8s-pod-network.80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" host="ip-172-31-16-55" Dec 12 17:27:56.646437 containerd[2012]: 2025-12-12 17:27:56.560 [INFO][4637] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b Dec 12 17:27:56.646437 containerd[2012]: 2025-12-12 17:27:56.568 [INFO][4637] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.64/26 handle="k8s-pod-network.80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" host="ip-172-31-16-55" Dec 12 17:27:56.646437 containerd[2012]: 2025-12-12 17:27:56.579 [INFO][4637] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.65/26] block=192.168.110.64/26 handle="k8s-pod-network.80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" host="ip-172-31-16-55" Dec 12 17:27:56.646437 containerd[2012]: 2025-12-12 17:27:56.579 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.65/26] handle="k8s-pod-network.80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" host="ip-172-31-16-55" Dec 12 17:27:56.646437 containerd[2012]: 2025-12-12 17:27:56.580 [INFO][4637] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
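For reference, the block the IPAM plugin works with above is the /26 it confirmed an affinity for on this node, and the address it claims (192.168.110.65, later handed to the pod as a /32) is the second address in that block. A trivial check of what the block covers, using only values taken from the log:

import ipaddress

block = ipaddress.ip_network("192.168.110.64/26")   # block with a confirmed host affinity above
assigned = ipaddress.ip_address("192.168.110.65")   # address claimed for the whisker pod

print(f"block {block} spans {block[0]} .. {block[-1]} ({block.num_addresses} addresses)")
print(f"{assigned} in block: {assigned in block}")
# -> block 192.168.110.64/26 spans 192.168.110.64 .. 192.168.110.127 (64 addresses)
# -> 192.168.110.65 in block: True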
Dec 12 17:27:56.646437 containerd[2012]: 2025-12-12 17:27:56.580 [INFO][4637] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.65/26] IPv6=[] ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" HandleID="k8s-pod-network.80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Workload="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" Dec 12 17:27:56.647133 containerd[2012]: 2025-12-12 17:27:56.587 [INFO][4626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Namespace="calico-system" Pod="whisker-cbdd468db-lnm8s" WorkloadEndpoint="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0", GenerateName:"whisker-cbdd468db-", Namespace:"calico-system", SelfLink:"", UID:"3ec49c85-5274-4ed7-b914-fc08a271b46e", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cbdd468db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"", Pod:"whisker-cbdd468db-lnm8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.110.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8baae981d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:56.647133 containerd[2012]: 2025-12-12 17:27:56.587 [INFO][4626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.65/32] ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Namespace="calico-system" Pod="whisker-cbdd468db-lnm8s" WorkloadEndpoint="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" Dec 12 17:27:56.647581 containerd[2012]: 2025-12-12 17:27:56.587 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8baae981d08 ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Namespace="calico-system" Pod="whisker-cbdd468db-lnm8s" WorkloadEndpoint="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" Dec 12 17:27:56.647581 containerd[2012]: 2025-12-12 17:27:56.604 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Namespace="calico-system" Pod="whisker-cbdd468db-lnm8s" WorkloadEndpoint="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" Dec 12 17:27:56.647760 containerd[2012]: 2025-12-12 17:27:56.605 [INFO][4626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Namespace="calico-system" Pod="whisker-cbdd468db-lnm8s" 
WorkloadEndpoint="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0", GenerateName:"whisker-cbdd468db-", Namespace:"calico-system", SelfLink:"", UID:"3ec49c85-5274-4ed7-b914-fc08a271b46e", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cbdd468db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b", Pod:"whisker-cbdd468db-lnm8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.110.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8baae981d08", MAC:"fe:eb:da:48:ff:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:56.648055 containerd[2012]: 2025-12-12 17:27:56.638 [INFO][4626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" Namespace="calico-system" Pod="whisker-cbdd468db-lnm8s" WorkloadEndpoint="ip--172--31--16--55-k8s-whisker--cbdd468db--lnm8s-eth0" Dec 12 17:27:56.690141 containerd[2012]: time="2025-12-12T17:27:56.689922210Z" level=info msg="connecting to shim 80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b" address="unix:///run/containerd/s/c10b5f5e034459b1550b1d6c2ff17e990501d58b83c10cbe99c275fda50cd7f9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:56.741215 systemd[1]: Started cri-containerd-80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b.scope - libcontainer container 80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b. 
Dec 12 17:27:56.764000 audit: BPF prog-id=182 op=LOAD Dec 12 17:27:56.765000 audit: BPF prog-id=183 op=LOAD Dec 12 17:27:56.765000 audit[4671]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c180 a2=98 a3=0 items=0 ppid=4660 pid=4671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830613663386239346463353138336463316536326239633330616239 Dec 12 17:27:56.765000 audit: BPF prog-id=183 op=UNLOAD Dec 12 17:27:56.765000 audit[4671]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830613663386239346463353138336463316536326239633330616239 Dec 12 17:27:56.766000 audit: BPF prog-id=184 op=LOAD Dec 12 17:27:56.766000 audit[4671]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=4660 pid=4671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830613663386239346463353138336463316536326239633330616239 Dec 12 17:27:56.766000 audit: BPF prog-id=185 op=LOAD Dec 12 17:27:56.766000 audit[4671]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=4660 pid=4671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830613663386239346463353138336463316536326239633330616239 Dec 12 17:27:56.766000 audit: BPF prog-id=185 op=UNLOAD Dec 12 17:27:56.766000 audit[4671]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830613663386239346463353138336463316536326239633330616239 Dec 12 17:27:56.766000 audit: BPF prog-id=184 op=UNLOAD Dec 12 17:27:56.766000 audit[4671]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830613663386239346463353138336463316536326239633330616239 Dec 12 17:27:56.766000 audit: BPF prog-id=186 op=LOAD Dec 12 17:27:56.766000 audit[4671]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c648 a2=98 a3=0 items=0 ppid=4660 pid=4671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830613663386239346463353138336463316536326239633330616239 Dec 12 17:27:56.824158 containerd[2012]: time="2025-12-12T17:27:56.824041099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbdd468db-lnm8s,Uid:3ec49c85-5274-4ed7-b914-fc08a271b46e,Namespace:calico-system,Attempt:0,} returns sandbox id \"80a6c8b94dc5183dc1e62b9c30ab96a5e58d0420b230cac569210a0fb20cfb4b\"" Dec 12 17:27:56.829384 containerd[2012]: time="2025-12-12T17:27:56.829302343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:27:57.097047 containerd[2012]: time="2025-12-12T17:27:57.096959848Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:57.099209 containerd[2012]: time="2025-12-12T17:27:57.099117748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:27:57.099355 containerd[2012]: time="2025-12-12T17:27:57.099251380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:57.100611 kubelet[3325]: E1212 17:27:57.100133 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:57.100611 kubelet[3325]: E1212 17:27:57.100211 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:57.100611 kubelet[3325]: E1212 17:27:57.100352 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-cbdd468db-lnm8s_calico-system(3ec49c85-5274-4ed7-b914-fc08a271b46e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found" logger="UnhandledError" Dec 12 17:27:57.103817 containerd[2012]: time="2025-12-12T17:27:57.103487992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:27:57.381349 containerd[2012]: time="2025-12-12T17:27:57.381091110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:57.390005 containerd[2012]: time="2025-12-12T17:27:57.389913042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:27:57.390241 containerd[2012]: time="2025-12-12T17:27:57.389970750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:57.391316 kubelet[3325]: E1212 17:27:57.391155 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:57.391316 kubelet[3325]: E1212 17:27:57.391259 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:57.391813 kubelet[3325]: E1212 17:27:57.391434 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-cbdd468db-lnm8s_calico-system(3ec49c85-5274-4ed7-b914-fc08a271b46e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:57.392080 kubelet[3325]: E1212 17:27:57.391903 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:27:57.435881 kubelet[3325]: I1212 17:27:57.434500 3325 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ece337-6017-4f67-9160-e5ad30878870" path="/var/lib/kubelet/pods/11ece337-6017-4f67-9160-e5ad30878870/volumes" Dec 12 17:27:57.843376 kubelet[3325]: E1212 17:27:57.842828 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:27:57.903000 audit[4807]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:57.903000 audit[4807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffccda6120 a2=0 a3=1 items=0 ppid=3625 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:57.911000 audit[4807]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:57.911000 audit[4807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffccda6120 a2=0 a3=1 items=0 ppid=3625 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:57.989000 audit: BPF prog-id=187 op=LOAD Dec 12 17:27:57.989000 audit[4820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe87b23f8 a2=98 a3=ffffe87b23e8 items=0 ppid=4709 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.989000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:57.990000 audit: BPF prog-id=187 op=UNLOAD Dec 12 17:27:57.990000 audit[4820]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe87b23c8 a3=0 items=0 ppid=4709 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.990000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:57.990000 audit: BPF prog-id=188 op=LOAD Dec 12 17:27:57.990000 audit[4820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe87b22a8 a2=74 a3=95 items=0 ppid=4709 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.990000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:57.991000 audit: BPF prog-id=188 op=UNLOAD Dec 12 17:27:57.991000 audit[4820]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4709 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.991000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:57.991000 audit: BPF prog-id=189 op=LOAD Dec 12 17:27:57.991000 audit[4820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe87b22d8 a2=40 a3=ffffe87b2308 items=0 ppid=4709 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.991000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:57.991000 audit: BPF prog-id=189 op=UNLOAD Dec 12 17:27:57.991000 audit[4820]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe87b2308 items=0 ppid=4709 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.991000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:57.995000 audit: BPF prog-id=190 op=LOAD Dec 12 17:27:57.995000 audit[4821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6baf748 a2=98 a3=fffff6baf738 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:57.995000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:27:57.995000 audit[4821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff6baf718 a3=0 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:57.995000 audit: BPF prog-id=191 op=LOAD Dec 12 17:27:57.995000 audit[4821]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=4 a0=5 a1=fffff6baf3d8 a2=74 a3=95 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:57.995000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:27:57.995000 audit[4821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:57.995000 audit: BPF prog-id=192 op=LOAD Dec 12 17:27:57.995000 audit[4821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff6baf438 a2=94 a3=2 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:57.995000 audit: BPF prog-id=192 op=UNLOAD Dec 12 17:27:57.995000 audit[4821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.204000 audit: BPF prog-id=193 op=LOAD Dec 12 17:27:58.204000 audit[4821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff6baf3f8 a2=40 a3=fffff6baf428 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.204000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.204000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:27:58.204000 audit[4821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff6baf428 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.204000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.224000 audit: BPF prog-id=194 op=LOAD Dec 12 17:27:58.224000 audit[4821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff6baf408 a2=94 a3=4 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.224000 audit: BPF prog-id=194 op=UNLOAD Dec 12 17:27:58.224000 audit[4821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.225000 audit: BPF prog-id=195 op=LOAD Dec 12 17:27:58.225000 audit[4821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff6baf248 a2=94 a3=5 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.225000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.225000 audit: BPF prog-id=195 op=UNLOAD Dec 12 17:27:58.225000 audit[4821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.225000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.225000 audit: BPF prog-id=196 op=LOAD Dec 12 17:27:58.225000 audit[4821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff6baf478 a2=94 a3=6 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.225000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.225000 audit: BPF prog-id=196 op=UNLOAD Dec 12 17:27:58.225000 audit[4821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.225000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.226000 audit: BPF prog-id=197 op=LOAD Dec 12 17:27:58.226000 audit[4821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff6baec48 a2=94 a3=83 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.226000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.226000 audit: BPF prog-id=198 op=LOAD Dec 12 17:27:58.226000 audit[4821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff6baea08 a2=94 a3=2 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.226000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.226000 audit: BPF prog-id=198 op=UNLOAD Dec 12 17:27:58.226000 audit[4821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.226000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 
17:27:58.227000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:27:58.227000 audit[4821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=85d6620 a3=85c9b00 items=0 ppid=4709 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.227000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:58.251000 audit: BPF prog-id=199 op=LOAD Dec 12 17:27:58.251000 audit[4826]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdad1fd98 a2=98 a3=ffffdad1fd88 items=0 ppid=4709 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:58.251000 audit: BPF prog-id=199 op=UNLOAD Dec 12 17:27:58.251000 audit[4826]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdad1fd68 a3=0 items=0 ppid=4709 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:58.251000 audit: BPF prog-id=200 op=LOAD Dec 12 17:27:58.251000 audit[4826]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdad1fc48 a2=74 a3=95 items=0 ppid=4709 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:58.251000 audit: BPF prog-id=200 op=UNLOAD Dec 12 17:27:58.251000 audit[4826]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4709 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:58.251000 audit: BPF prog-id=201 op=LOAD Dec 12 17:27:58.251000 audit[4826]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdad1fc78 a2=40 a3=ffffdad1fca8 items=0 ppid=4709 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:58.251000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:27:58.251000 audit[4826]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffdad1fca8 items=0 ppid=4709 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:58.275338 systemd-networkd[1585]: cali8baae981d08: Gained IPv6LL Dec 12 17:27:58.416641 systemd-networkd[1585]: vxlan.calico: Link UP Dec 12 17:27:58.417146 systemd-networkd[1585]: vxlan.calico: Gained carrier Dec 12 17:27:58.430242 containerd[2012]: time="2025-12-12T17:27:58.430192471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5lq7k,Uid:3db1076d-3ef5-41a5-8a08-117779fb2cae,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:58.488108 (udev-worker)[4545]: Network interface NamePolicy= disabled on kernel command line. Dec 12 17:27:58.488000 audit: BPF prog-id=202 op=LOAD Dec 12 17:27:58.488000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd5c20e68 a2=98 a3=ffffd5c20e58 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.490000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:27:58.490000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd5c20e38 a3=0 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.490000 audit: BPF prog-id=203 op=LOAD Dec 12 17:27:58.490000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd5c20b48 a2=74 a3=95 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.490000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.490000 audit: BPF prog-id=203 op=UNLOAD Dec 12 17:27:58.490000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.490000 audit: BPF prog-id=204 op=LOAD Dec 12 17:27:58.490000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd5c20ba8 a2=94 a3=2 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.490000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:27:58.490000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.490000 audit: BPF prog-id=205 op=LOAD Dec 12 17:27:58.490000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd5c20a28 a2=40 a3=ffffd5c20a58 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.490000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:27:58.490000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd5c20a58 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.490000 audit: BPF prog-id=206 op=LOAD Dec 12 17:27:58.490000 audit[4860]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=6 a0=5 a1=ffffd5c20b78 a2=94 a3=b7 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.490000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:27:58.490000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.490000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.491000 audit: BPF prog-id=207 op=LOAD Dec 12 17:27:58.491000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd5c20228 a2=94 a3=2 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.491000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.494000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:27:58.494000 audit[4860]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.494000 audit: BPF prog-id=208 op=LOAD Dec 12 17:27:58.494000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd5c203b8 a2=94 a3=30 items=0 ppid=4709 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:58.506000 audit: BPF prog-id=209 op=LOAD Dec 12 17:27:58.506000 audit[4864]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc5a16348 a2=98 a3=ffffc5a16338 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.506000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.506000 audit: BPF prog-id=209 op=UNLOAD Dec 12 17:27:58.506000 audit[4864]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc5a16318 a3=0 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.507000 audit: BPF prog-id=210 op=LOAD Dec 12 17:27:58.507000 audit[4864]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5a15fd8 a2=74 a3=95 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.507000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:27:58.507000 audit[4864]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.508000 audit: BPF prog-id=211 op=LOAD Dec 12 17:27:58.508000 audit[4864]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5a16038 a2=94 a3=2 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.508000 audit: BPF prog-id=211 op=UNLOAD Dec 12 17:27:58.508000 audit[4864]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.720179 systemd-networkd[1585]: calif70b3f1480a: Link UP Dec 12 17:27:58.722217 systemd-networkd[1585]: calif70b3f1480a: Gained carrier Dec 12 17:27:58.756418 containerd[2012]: 2025-12-12 17:27:58.572 [INFO][4846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0 coredns-66bc5c9577- kube-system 3db1076d-3ef5-41a5-8a08-117779fb2cae 881 0 2025-12-12 17:27:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-55 coredns-66bc5c9577-5lq7k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif70b3f1480a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Namespace="kube-system" Pod="coredns-66bc5c9577-5lq7k" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-" Dec 12 17:27:58.756418 containerd[2012]: 2025-12-12 17:27:58.572 [INFO][4846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Namespace="kube-system" Pod="coredns-66bc5c9577-5lq7k" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" Dec 12 17:27:58.756418 containerd[2012]: 2025-12-12 17:27:58.635 [INFO][4871] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" HandleID="k8s-pod-network.c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Workload="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.635 [INFO][4871] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" HandleID="k8s-pod-network.c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Workload="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-55", "pod":"coredns-66bc5c9577-5lq7k", "timestamp":"2025-12-12 17:27:58.635066144 +0000 UTC"}, Hostname:"ip-172-31-16-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.635 [INFO][4871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.635 [INFO][4871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.635 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-55' Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.653 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" host="ip-172-31-16-55" Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.662 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-55" Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.671 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.674 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.679 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:27:58.756757 containerd[2012]: 2025-12-12 17:27:58.679 [INFO][4871] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.64/26 handle="k8s-pod-network.c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" host="ip-172-31-16-55" Dec 12 17:27:58.758765 containerd[2012]: 2025-12-12 17:27:58.683 [INFO][4871] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173 Dec 12 17:27:58.758765 containerd[2012]: 2025-12-12 17:27:58.691 [INFO][4871] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.64/26 handle="k8s-pod-network.c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" host="ip-172-31-16-55" Dec 12 17:27:58.758765 containerd[2012]: 2025-12-12 17:27:58.705 [INFO][4871] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.66/26] block=192.168.110.64/26 handle="k8s-pod-network.c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" host="ip-172-31-16-55" Dec 12 17:27:58.758765 containerd[2012]: 2025-12-12 17:27:58.705 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.66/26] handle="k8s-pod-network.c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" host="ip-172-31-16-55" Dec 12 17:27:58.758765 containerd[2012]: 2025-12-12 17:27:58.706 [INFO][4871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:58.758765 containerd[2012]: 2025-12-12 17:27:58.706 [INFO][4871] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.66/26] IPv6=[] ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" HandleID="k8s-pod-network.c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Workload="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" Dec 12 17:27:58.759969 containerd[2012]: 2025-12-12 17:27:58.715 [INFO][4846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Namespace="kube-system" Pod="coredns-66bc5c9577-5lq7k" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3db1076d-3ef5-41a5-8a08-117779fb2cae", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"", Pod:"coredns-66bc5c9577-5lq7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif70b3f1480a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:58.759969 containerd[2012]: 2025-12-12 17:27:58.715 [INFO][4846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.66/32] ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Namespace="kube-system" Pod="coredns-66bc5c9577-5lq7k" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" Dec 12 17:27:58.759969 containerd[2012]: 2025-12-12 17:27:58.715 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif70b3f1480a ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Namespace="kube-system" Pod="coredns-66bc5c9577-5lq7k" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" Dec 12 
17:27:58.759969 containerd[2012]: 2025-12-12 17:27:58.720 [INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Namespace="kube-system" Pod="coredns-66bc5c9577-5lq7k" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" Dec 12 17:27:58.759969 containerd[2012]: 2025-12-12 17:27:58.721 [INFO][4846] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Namespace="kube-system" Pod="coredns-66bc5c9577-5lq7k" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3db1076d-3ef5-41a5-8a08-117779fb2cae", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173", Pod:"coredns-66bc5c9577-5lq7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif70b3f1480a", MAC:"d2:5d:2e:e3:ba:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:58.759969 containerd[2012]: 2025-12-12 17:27:58.748 [INFO][4846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" Namespace="kube-system" Pod="coredns-66bc5c9577-5lq7k" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--5lq7k-eth0" Dec 12 17:27:58.807729 containerd[2012]: time="2025-12-12T17:27:58.807671613Z" level=info msg="connecting to shim c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173" address="unix:///run/containerd/s/c9c8dabe3429b3770b51881f87fd4d81cbfea865f3439606815845e18044b879" namespace=k8s.io protocol=ttrpc version=3 Dec 12 
17:27:58.851555 kubelet[3325]: E1212 17:27:58.851427 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:27:58.884314 systemd[1]: Started cri-containerd-c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173.scope - libcontainer container c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173. Dec 12 17:27:58.918000 audit: BPF prog-id=212 op=LOAD Dec 12 17:27:58.919000 audit: BPF prog-id=213 op=LOAD Dec 12 17:27:58.919000 audit[4902]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4891 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339326662613437643263643066336638656237663535616437353662 Dec 12 17:27:58.919000 audit: BPF prog-id=213 op=UNLOAD Dec 12 17:27:58.919000 audit[4902]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4891 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339326662613437643263643066336638656237663535616437353662 Dec 12 17:27:58.919000 audit: BPF prog-id=214 op=LOAD Dec 12 17:27:58.919000 audit[4902]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4891 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339326662613437643263643066336638656237663535616437353662 Dec 12 17:27:58.920000 audit: BPF prog-id=215 op=LOAD Dec 12 17:27:58.920000 audit[4902]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4891 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339326662613437643263643066336638656237663535616437353662 Dec 12 17:27:58.920000 audit: BPF prog-id=215 op=UNLOAD Dec 12 17:27:58.920000 audit[4902]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4891 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339326662613437643263643066336638656237663535616437353662 Dec 12 17:27:58.920000 audit: BPF prog-id=214 op=UNLOAD Dec 12 17:27:58.920000 audit[4902]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4891 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339326662613437643263643066336638656237663535616437353662 Dec 12 17:27:58.920000 audit: BPF prog-id=216 op=LOAD Dec 12 17:27:58.920000 audit[4902]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4891 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339326662613437643263643066336638656237663535616437353662 Dec 12 17:27:58.942000 audit: BPF prog-id=217 op=LOAD Dec 12 17:27:58.942000 audit[4864]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5a15ff8 a2=40 a3=ffffc5a16028 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.942000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.943000 audit: BPF prog-id=217 op=UNLOAD Dec 12 17:27:58.943000 audit[4864]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc5a16028 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.943000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.978000 audit: BPF prog-id=218 op=LOAD Dec 12 17:27:58.978000 audit[4864]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc5a16008 a2=94 a3=4 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.978000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.978000 audit: BPF prog-id=218 op=UNLOAD Dec 12 17:27:58.978000 audit[4864]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.978000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.979000 audit: BPF prog-id=219 op=LOAD Dec 12 17:27:58.979000 audit[4864]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc5a15e48 a2=94 a3=5 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.979000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.979000 audit: BPF prog-id=219 op=UNLOAD Dec 12 17:27:58.979000 audit[4864]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.979000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.980000 audit: BPF prog-id=220 op=LOAD Dec 12 17:27:58.980000 audit[4864]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc5a16078 a2=94 a3=6 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.980000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.981000 audit: BPF prog-id=220 op=UNLOAD Dec 12 17:27:58.981000 audit[4864]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.981000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.982000 audit: BPF prog-id=221 op=LOAD Dec 12 17:27:58.982000 audit[4864]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc5a15848 a2=94 a3=83 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.982000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.984000 audit: BPF prog-id=222 op=LOAD Dec 12 17:27:58.984000 audit[4864]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc5a15608 a2=94 a3=2 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.984000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.984000 audit: BPF prog-id=222 op=UNLOAD Dec 12 17:27:58.984000 audit[4864]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.984000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.986000 audit: BPF prog-id=221 op=UNLOAD Dec 12 17:27:58.986000 audit[4864]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=207da620 a3=207cdb00 items=0 ppid=4709 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.986000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:58.992365 containerd[2012]: time="2025-12-12T17:27:58.992300050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5lq7k,Uid:3db1076d-3ef5-41a5-8a08-117779fb2cae,Namespace:kube-system,Attempt:0,} returns sandbox id \"c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173\"" Dec 12 17:27:58.999000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:27:58.999000 audit[4709]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40007b0800 a2=0 a3=0 items=0 ppid=4697 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.999000 audit: PROCTITLE 
proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 17:27:59.008416 containerd[2012]: time="2025-12-12T17:27:59.008172882Z" level=info msg="CreateContainer within sandbox \"c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:27:59.044360 containerd[2012]: time="2025-12-12T17:27:59.044292930Z" level=info msg="Container 02ffe1b65c9880aedc35fe978d97c62af146865412967e712a9dd04c32b7356f: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:59.047221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001114141.mount: Deactivated successfully. Dec 12 17:27:59.061264 containerd[2012]: time="2025-12-12T17:27:59.061180602Z" level=info msg="CreateContainer within sandbox \"c92fba47d2cd0f3f8eb7f55ad756bf38fe7b2df7e0e19bdf82ce30bed5b07173\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02ffe1b65c9880aedc35fe978d97c62af146865412967e712a9dd04c32b7356f\"" Dec 12 17:27:59.063839 containerd[2012]: time="2025-12-12T17:27:59.063775530Z" level=info msg="StartContainer for \"02ffe1b65c9880aedc35fe978d97c62af146865412967e712a9dd04c32b7356f\"" Dec 12 17:27:59.071226 containerd[2012]: time="2025-12-12T17:27:59.071098602Z" level=info msg="connecting to shim 02ffe1b65c9880aedc35fe978d97c62af146865412967e712a9dd04c32b7356f" address="unix:///run/containerd/s/c9c8dabe3429b3770b51881f87fd4d81cbfea865f3439606815845e18044b879" protocol=ttrpc version=3 Dec 12 17:27:59.133511 systemd[1]: Started cri-containerd-02ffe1b65c9880aedc35fe978d97c62af146865412967e712a9dd04c32b7356f.scope - libcontainer container 02ffe1b65c9880aedc35fe978d97c62af146865412967e712a9dd04c32b7356f. Dec 12 17:27:59.168000 audit: BPF prog-id=223 op=LOAD Dec 12 17:27:59.170000 audit: BPF prog-id=224 op=LOAD Dec 12 17:27:59.170000 audit[4935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4891 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666665316236356339383830616564633335666539373864393763 Dec 12 17:27:59.171000 audit: BPF prog-id=224 op=UNLOAD Dec 12 17:27:59.171000 audit[4935]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4891 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666665316236356339383830616564633335666539373864393763 Dec 12 17:27:59.171000 audit: BPF prog-id=225 op=LOAD Dec 12 17:27:59.171000 audit[4935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4891 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.171000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666665316236356339383830616564633335666539373864393763 Dec 12 17:27:59.172000 audit: BPF prog-id=226 op=LOAD Dec 12 17:27:59.172000 audit[4935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4891 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666665316236356339383830616564633335666539373864393763 Dec 12 17:27:59.172000 audit: BPF prog-id=226 op=UNLOAD Dec 12 17:27:59.172000 audit[4935]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4891 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666665316236356339383830616564633335666539373864393763 Dec 12 17:27:59.172000 audit: BPF prog-id=225 op=UNLOAD Dec 12 17:27:59.172000 audit[4935]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4891 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666665316236356339383830616564633335666539373864393763 Dec 12 17:27:59.172000 audit: BPF prog-id=227 op=LOAD Dec 12 17:27:59.172000 audit[4935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4891 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666665316236356339383830616564633335666539373864393763 Dec 12 17:27:59.223452 containerd[2012]: time="2025-12-12T17:27:59.222938719Z" level=info msg="StartContainer for \"02ffe1b65c9880aedc35fe978d97c62af146865412967e712a9dd04c32b7356f\" returns successfully" Dec 12 17:27:59.253000 audit[4980]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4980 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:59.253000 audit[4980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd9dda8b0 a2=0 a3=ffffb4d78fa8 items=0 ppid=4709 pid=4980 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.253000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:59.274000 audit[4982]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4982 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:59.274000 audit[4982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd815eba0 a2=0 a3=ffff84b57fa8 items=0 ppid=4709 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.274000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:59.307000 audit[4981]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4981 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:59.307000 audit[4981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc17be450 a2=0 a3=ffffa95e0fa8 items=0 ppid=4709 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.307000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:59.326000 audit[4984]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4984 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:59.326000 audit[4984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffd1156500 a2=0 a3=ffff9d15cfa8 items=0 ppid=4709 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.326000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:59.417000 audit[4997]: NETFILTER_CFG table=filter:125 family=2 entries=42 op=nft_register_chain pid=4997 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:59.417000 audit[4997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=fffff220f390 a2=0 a3=ffffb9e9afa8 items=0 ppid=4709 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.417000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:59.490908 systemd-networkd[1585]: vxlan.calico: Gained IPv6LL Dec 12 17:27:59.877158 kubelet[3325]: I1212 17:27:59.877029 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5lq7k" 
podStartSLOduration=49.87700513 podStartE2EDuration="49.87700513s" podCreationTimestamp="2025-12-12 17:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:59.876785206 +0000 UTC m=+54.899237866" watchObservedRunningTime="2025-12-12 17:27:59.87700513 +0000 UTC m=+54.899457790" Dec 12 17:27:59.919000 audit[5000]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:59.922030 kernel: kauditd_printk_skb: 278 callbacks suppressed Dec 12 17:27:59.922192 kernel: audit: type=1325 audit(1765560479.919:686): table=filter:126 family=2 entries=20 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:59.919000 audit[5000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc62fc290 a2=0 a3=1 items=0 ppid=3625 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.933290 kernel: audit: type=1300 audit(1765560479.919:686): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc62fc290 a2=0 a3=1 items=0 ppid=3625 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.919000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:59.936387 kernel: audit: type=1327 audit(1765560479.919:686): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:59.937000 audit[5000]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:59.943936 kernel: audit: type=1325 audit(1765560479.937:687): table=nat:127 family=2 entries=14 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:59.937000 audit[5000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc62fc290 a2=0 a3=1 items=0 ppid=3625 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.937000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:59.953774 kernel: audit: type=1300 audit(1765560479.937:687): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc62fc290 a2=0 a3=1 items=0 ppid=3625 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:59.954247 kernel: audit: type=1327 audit(1765560479.937:687): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:00.130476 systemd-networkd[1585]: calif70b3f1480a: Gained IPv6LL Dec 12 17:28:00.427313 containerd[2012]: time="2025-12-12T17:28:00.427120413Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5fcd4b4759-jhpmk,Uid:146b6197-b092-48f2-948f-08d710a51bd7,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:28:00.429419 containerd[2012]: time="2025-12-12T17:28:00.429326517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cqttw,Uid:da236b25-6353-4685-ace8-c9064e3e4481,Namespace:kube-system,Attempt:0,}" Dec 12 17:28:00.431106 containerd[2012]: time="2025-12-12T17:28:00.431021361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bc6ffc4-bklww,Uid:334ae9c1-5a14-4018-8c8f-986d294ed109,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:28:00.802724 systemd-networkd[1585]: califa7880f10c5: Link UP Dec 12 17:28:00.804303 systemd-networkd[1585]: califa7880f10c5: Gained carrier Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.628 [INFO][5010] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0 coredns-66bc5c9577- kube-system da236b25-6353-4685-ace8-c9064e3e4481 883 0 2025-12-12 17:27:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-55 coredns-66bc5c9577-cqttw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa7880f10c5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Namespace="kube-system" Pod="coredns-66bc5c9577-cqttw" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.628 [INFO][5010] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Namespace="kube-system" Pod="coredns-66bc5c9577-cqttw" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.709 [INFO][5040] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" HandleID="k8s-pod-network.9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Workload="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.709 [INFO][5040] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" HandleID="k8s-pod-network.9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Workload="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3880), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-55", "pod":"coredns-66bc5c9577-cqttw", "timestamp":"2025-12-12 17:28:00.709098478 +0000 UTC"}, Hostname:"ip-172-31-16-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.709 [INFO][5040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.709 [INFO][5040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.709 [INFO][5040] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-55' Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.726 [INFO][5040] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" host="ip-172-31-16-55" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.738 [INFO][5040] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-55" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.752 [INFO][5040] ipam/ipam.go 511: Trying affinity for 192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.759 [INFO][5040] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.765 [INFO][5040] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.766 [INFO][5040] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.64/26 handle="k8s-pod-network.9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" host="ip-172-31-16-55" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.771 [INFO][5040] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7 Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.780 [INFO][5040] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.64/26 handle="k8s-pod-network.9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" host="ip-172-31-16-55" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.789 [INFO][5040] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.67/26] block=192.168.110.64/26 handle="k8s-pod-network.9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" host="ip-172-31-16-55" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.790 [INFO][5040] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.67/26] handle="k8s-pod-network.9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" host="ip-172-31-16-55" Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.790 [INFO][5040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
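[editor's note] The IPAM sequence above claims 192.168.110.67 for coredns-66bc5c9577-cqttw out of the block 192.168.110.64/26 that the log confirms as affine to ip-172-31-16-55. A quick sanity check of that containment and of the block size, using only Python's standard ipaddress module (illustration, not taken from the log):

    import ipaddress

    block = ipaddress.ip_network("192.168.110.64/26")    # the node-affine IPAM block
    pod_ip = ipaddress.ip_address("192.168.110.67")      # address claimed for the coredns pod

    print(pod_ip in block)        # True: the pod IP comes out of this node's block
    print(block.num_addresses)    # 64: a /26 gives the node up to 64 pod addresses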
Dec 12 17:28:00.839080 containerd[2012]: 2025-12-12 17:28:00.790 [INFO][5040] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.67/26] IPv6=[] ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" HandleID="k8s-pod-network.9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Workload="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" Dec 12 17:28:00.841462 containerd[2012]: 2025-12-12 17:28:00.796 [INFO][5010] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Namespace="kube-system" Pod="coredns-66bc5c9577-cqttw" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"da236b25-6353-4685-ace8-c9064e3e4481", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"", Pod:"coredns-66bc5c9577-cqttw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7880f10c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:00.841462 containerd[2012]: 2025-12-12 17:28:00.796 [INFO][5010] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.67/32] ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Namespace="kube-system" Pod="coredns-66bc5c9577-cqttw" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" Dec 12 17:28:00.841462 containerd[2012]: 2025-12-12 17:28:00.796 [INFO][5010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa7880f10c5 ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Namespace="kube-system" Pod="coredns-66bc5c9577-cqttw" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" Dec 12 
17:28:00.841462 containerd[2012]: 2025-12-12 17:28:00.805 [INFO][5010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Namespace="kube-system" Pod="coredns-66bc5c9577-cqttw" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" Dec 12 17:28:00.841462 containerd[2012]: 2025-12-12 17:28:00.808 [INFO][5010] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Namespace="kube-system" Pod="coredns-66bc5c9577-cqttw" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"da236b25-6353-4685-ace8-c9064e3e4481", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7", Pod:"coredns-66bc5c9577-cqttw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7880f10c5", MAC:"36:1c:a2:c9:77:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:00.841462 containerd[2012]: 2025-12-12 17:28:00.829 [INFO][5010] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" Namespace="kube-system" Pod="coredns-66bc5c9577-cqttw" WorkloadEndpoint="ip--172--31--16--55-k8s-coredns--66bc5c9577--cqttw-eth0" Dec 12 17:28:00.965000 audit[5071]: NETFILTER_CFG table=filter:128 family=2 entries=42 op=nft_register_chain pid=5071 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:00.965000 audit[5071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22008 a0=3 a1=ffffde51fa80 a2=0 a3=ffff841f1fa8 items=0 ppid=4709 
pid=5071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.975519 containerd[2012]: time="2025-12-12T17:28:00.975440256Z" level=info msg="connecting to shim 9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7" address="unix:///run/containerd/s/05690141b5d787de09b6a00c6ebbaf736a06821fdbee1ac0c611182c44cda5e9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:00.981245 kernel: audit: type=1325 audit(1765560480.965:688): table=filter:128 family=2 entries=42 op=nft_register_chain pid=5071 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:00.981605 kernel: audit: type=1300 audit(1765560480.965:688): arch=c00000b7 syscall=211 success=yes exit=22008 a0=3 a1=ffffde51fa80 a2=0 a3=ffff841f1fa8 items=0 ppid=4709 pid=5071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:00.965000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:00.988381 kernel: audit: type=1327 audit(1765560480.965:688): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:00.993775 systemd-networkd[1585]: cali8bdcb2d9910: Link UP Dec 12 17:28:00.998062 systemd-networkd[1585]: cali8bdcb2d9910: Gained carrier Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.623 [INFO][5002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0 calico-apiserver-5fcd4b4759- calico-apiserver 146b6197-b092-48f2-948f-08d710a51bd7 889 0 2025-12-12 17:27:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fcd4b4759 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-55 calico-apiserver-5fcd4b4759-jhpmk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8bdcb2d9910 [] [] }} ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Namespace="calico-apiserver" Pod="calico-apiserver-5fcd4b4759-jhpmk" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.623 [INFO][5002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Namespace="calico-apiserver" Pod="calico-apiserver-5fcd4b4759-jhpmk" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.749 [INFO][5038] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" HandleID="k8s-pod-network.0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Workload="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 
17:28:00.750 [INFO][5038] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" HandleID="k8s-pod-network.0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Workload="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-55", "pod":"calico-apiserver-5fcd4b4759-jhpmk", "timestamp":"2025-12-12 17:28:00.749784442 +0000 UTC"}, Hostname:"ip-172-31-16-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.750 [INFO][5038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.790 [INFO][5038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.791 [INFO][5038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-55' Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.846 [INFO][5038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" host="ip-172-31-16-55" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.866 [INFO][5038] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-55" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.884 [INFO][5038] ipam/ipam.go 511: Trying affinity for 192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.889 [INFO][5038] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.903 [INFO][5038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.905 [INFO][5038] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.64/26 handle="k8s-pod-network.0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" host="ip-172-31-16-55" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.925 [INFO][5038] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26 Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.943 [INFO][5038] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.64/26 handle="k8s-pod-network.0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" host="ip-172-31-16-55" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.966 [INFO][5038] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.68/26] block=192.168.110.64/26 handle="k8s-pod-network.0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" host="ip-172-31-16-55" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.966 [INFO][5038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.68/26] handle="k8s-pod-network.0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" host="ip-172-31-16-55" Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.971 [INFO][5038] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Dec 12 17:28:01.076777 containerd[2012]: 2025-12-12 17:28:00.971 [INFO][5038] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.68/26] IPv6=[] ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" HandleID="k8s-pod-network.0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Workload="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" Dec 12 17:28:01.079672 containerd[2012]: 2025-12-12 17:28:00.982 [INFO][5002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Namespace="calico-apiserver" Pod="calico-apiserver-5fcd4b4759-jhpmk" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0", GenerateName:"calico-apiserver-5fcd4b4759-", Namespace:"calico-apiserver", SelfLink:"", UID:"146b6197-b092-48f2-948f-08d710a51bd7", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcd4b4759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"", Pod:"calico-apiserver-5fcd4b4759-jhpmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bdcb2d9910", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:01.079672 containerd[2012]: 2025-12-12 17:28:00.982 [INFO][5002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.68/32] ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Namespace="calico-apiserver" Pod="calico-apiserver-5fcd4b4759-jhpmk" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" Dec 12 17:28:01.079672 containerd[2012]: 2025-12-12 17:28:00.982 [INFO][5002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bdcb2d9910 ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Namespace="calico-apiserver" Pod="calico-apiserver-5fcd4b4759-jhpmk" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" Dec 12 17:28:01.079672 containerd[2012]: 2025-12-12 17:28:00.994 [INFO][5002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Namespace="calico-apiserver" Pod="calico-apiserver-5fcd4b4759-jhpmk" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" Dec 12 17:28:01.079672 containerd[2012]: 2025-12-12 17:28:00.995 [INFO][5002] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Namespace="calico-apiserver" Pod="calico-apiserver-5fcd4b4759-jhpmk" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0", GenerateName:"calico-apiserver-5fcd4b4759-", Namespace:"calico-apiserver", SelfLink:"", UID:"146b6197-b092-48f2-948f-08d710a51bd7", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcd4b4759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26", Pod:"calico-apiserver-5fcd4b4759-jhpmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bdcb2d9910", MAC:"b2:78:ed:43:c4:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:01.079672 containerd[2012]: 2025-12-12 17:28:01.072 [INFO][5002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" Namespace="calico-apiserver" Pod="calico-apiserver-5fcd4b4759-jhpmk" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--5fcd4b4759--jhpmk-eth0" Dec 12 17:28:01.087000 audit[5098]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:01.093083 kernel: audit: type=1325 audit(1765560481.087:689): table=filter:129 family=2 entries=17 op=nft_register_rule pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:01.087000 audit[5098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcb86e2a0 a2=0 a3=1 items=0 ppid=3625 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:01.101000 audit[5098]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:01.101000 audit[5098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffcb86e2a0 a2=0 a3=1 items=0 ppid=3625 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.101000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:01.141837 systemd[1]: Started cri-containerd-9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7.scope - libcontainer container 9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7. Dec 12 17:28:01.214587 containerd[2012]: time="2025-12-12T17:28:01.214227837Z" level=info msg="connecting to shim 0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26" address="unix:///run/containerd/s/ae6e08dd01700578f90d6c348aa204df47d3a9a853930d9ac8a65ebc7e8c2db1" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:01.219000 audit: BPF prog-id=228 op=LOAD Dec 12 17:28:01.228000 audit: BPF prog-id=229 op=LOAD Dec 12 17:28:01.228000 audit[5090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=5076 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313336373561396430633864366438653764343865633331363865 Dec 12 17:28:01.232275 systemd-networkd[1585]: cali8bc4ea1469c: Link UP Dec 12 17:28:01.231000 audit: BPF prog-id=229 op=UNLOAD Dec 12 17:28:01.231000 audit[5090]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5076 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313336373561396430633864366438653764343865633331363865 Dec 12 17:28:01.236000 audit: BPF prog-id=230 op=LOAD Dec 12 17:28:01.236000 audit[5090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=5076 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313336373561396430633864366438653764343865633331363865 Dec 12 17:28:01.235000 audit[5124]: NETFILTER_CFG table=filter:131 family=2 entries=60 op=nft_register_chain pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:01.235000 audit[5124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32232 a0=3 a1=fffffd5a6800 a2=0 a3=ffffb40dbfa8 items=0 ppid=4709 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.235000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:01.238740 systemd-networkd[1585]: cali8bc4ea1469c: Gained carrier Dec 12 17:28:01.241000 audit: BPF prog-id=231 op=LOAD Dec 12 17:28:01.241000 audit[5090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=5076 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313336373561396430633864366438653764343865633331363865 Dec 12 17:28:01.244000 audit: BPF prog-id=231 op=UNLOAD Dec 12 17:28:01.244000 audit[5090]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5076 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313336373561396430633864366438653764343865633331363865 Dec 12 17:28:01.245000 audit: BPF prog-id=230 op=UNLOAD Dec 12 17:28:01.245000 audit[5090]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5076 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313336373561396430633864366438653764343865633331363865 Dec 12 17:28:01.246000 audit: BPF prog-id=232 op=LOAD Dec 12 17:28:01.246000 audit[5090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=5076 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313336373561396430633864366438653764343865633331363865 Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:00.633 [INFO][5017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0 calico-apiserver-6b6bc6ffc4- calico-apiserver 334ae9c1-5a14-4018-8c8f-986d294ed109 882 0 2025-12-12 17:27:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b6bc6ffc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-55 calico-apiserver-6b6bc6ffc4-bklww eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8bc4ea1469c [] [] }} ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-bklww" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:00.634 [INFO][5017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-bklww" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:00.768 [INFO][5045] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" HandleID="k8s-pod-network.bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Workload="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:00.768 [INFO][5045] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" HandleID="k8s-pod-network.bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Workload="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000315ee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-55", "pod":"calico-apiserver-6b6bc6ffc4-bklww", "timestamp":"2025-12-12 17:28:00.768612539 +0000 UTC"}, Hostname:"ip-172-31-16-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:00.768 [INFO][5045] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:00.966 [INFO][5045] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
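[editor's note] Calico serializes these assignments behind a host-wide IPAM lock: request [5045] logs "About to acquire host-wide IPAM lock." at 17:28:00.768 but "Acquired" only at 17:28:00.966, while the [5040] and [5038] requests are being served. A rough estimate of that queueing delay from the timestamps above (values copied from the log; sub-millisecond digits are approximated):

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    requested = datetime.strptime("2025-12-12 17:28:00.768612", fmt)  # "About to acquire..."
    acquired  = datetime.strptime("2025-12-12 17:28:00.966000", fmt)  # "Acquired host-wide IPAM lock."
    print(f"{(acquired - requested).total_seconds():.3f}s")           # ~0.197s queued behind the other two pods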
Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:00.966 [INFO][5045] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-55' Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.032 [INFO][5045] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" host="ip-172-31-16-55" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.070 [INFO][5045] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-55" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.100 [INFO][5045] ipam/ipam.go 511: Trying affinity for 192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.112 [INFO][5045] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.136 [INFO][5045] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.136 [INFO][5045] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.64/26 handle="k8s-pod-network.bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" host="ip-172-31-16-55" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.144 [INFO][5045] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.164 [INFO][5045] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.64/26 handle="k8s-pod-network.bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" host="ip-172-31-16-55" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.197 [INFO][5045] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.69/26] block=192.168.110.64/26 handle="k8s-pod-network.bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" host="ip-172-31-16-55" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.198 [INFO][5045] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.69/26] handle="k8s-pod-network.bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" host="ip-172-31-16-55" Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.199 [INFO][5045] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:28:01.321447 containerd[2012]: 2025-12-12 17:28:01.200 [INFO][5045] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.69/26] IPv6=[] ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" HandleID="k8s-pod-network.bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Workload="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" Dec 12 17:28:01.322556 containerd[2012]: 2025-12-12 17:28:01.206 [INFO][5017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-bklww" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0", GenerateName:"calico-apiserver-6b6bc6ffc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"334ae9c1-5a14-4018-8c8f-986d294ed109", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b6bc6ffc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"", Pod:"calico-apiserver-6b6bc6ffc4-bklww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bc4ea1469c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:01.322556 containerd[2012]: 2025-12-12 17:28:01.206 [INFO][5017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.69/32] ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-bklww" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" Dec 12 17:28:01.322556 containerd[2012]: 2025-12-12 17:28:01.208 [INFO][5017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bc4ea1469c ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-bklww" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" Dec 12 17:28:01.322556 containerd[2012]: 2025-12-12 17:28:01.247 [INFO][5017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-bklww" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" Dec 12 17:28:01.322556 containerd[2012]: 2025-12-12 17:28:01.250 [INFO][5017] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-bklww" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0", GenerateName:"calico-apiserver-6b6bc6ffc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"334ae9c1-5a14-4018-8c8f-986d294ed109", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b6bc6ffc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b", Pod:"calico-apiserver-6b6bc6ffc4-bklww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bc4ea1469c", MAC:"72:89:fa:11:a3:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:01.322556 containerd[2012]: 2025-12-12 17:28:01.286 [INFO][5017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-bklww" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--bklww-eth0" Dec 12 17:28:01.346410 systemd[1]: Started cri-containerd-0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26.scope - libcontainer container 0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26. 
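[editor's note] The interleaved audit records read more easily with the syscall numbers resolved: arch=c00000b7 is AUDIT_ARCH_AARCH64, which uses the asm-generic table, so syscall=211 is sendmsg (iptables-restore pushing the nftables ruleset over a netlink socket), syscall=280 is bpf (runc loading programs at container start), and syscall=57 is close. A tiny lookup sketch for the numbers that appear in this stretch of the log (the mapping is assumed from the asm-generic table, not taken from the log itself):

    # arm64 / asm-generic syscall numbers seen in the audit records above.
    AARCH64_SYSCALLS = {57: "close", 211: "sendmsg", 280: "bpf"}

    def syscall_name(nr: int) -> str:
        return AARCH64_SYSCALLS.get(nr, f"unknown({nr})")

    print(syscall_name(211))  # sendmsg
    print(syscall_name(280))  # bpf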
Dec 12 17:28:01.384000 audit[5161]: NETFILTER_CFG table=filter:132 family=2 entries=41 op=nft_register_chain pid=5161 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:01.384000 audit[5161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23096 a0=3 a1=ffffd316c1b0 a2=0 a3=ffffb1054fa8 items=0 ppid=4709 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.384000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:01.419398 containerd[2012]: time="2025-12-12T17:28:01.417782986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cqttw,Uid:da236b25-6353-4685-ace8-c9064e3e4481,Namespace:kube-system,Attempt:0,} returns sandbox id \"9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7\"" Dec 12 17:28:01.423735 containerd[2012]: time="2025-12-12T17:28:01.423360454Z" level=info msg="connecting to shim bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b" address="unix:///run/containerd/s/240218d6faa7c5dc979fd28d5adad462256afca139b610b53abb7f37b21206d5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:01.434000 audit: BPF prog-id=233 op=LOAD Dec 12 17:28:01.439195 containerd[2012]: time="2025-12-12T17:28:01.439142710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b7498b7d9-pzhlv,Uid:baecc54d-00d4-4fc9-9061-9d5f893dca48,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:01.442000 audit: BPF prog-id=234 op=LOAD Dec 12 17:28:01.442000 audit[5140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5126 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031323935313266633263626464646531613737396664653763346530 Dec 12 17:28:01.458000 audit: BPF prog-id=234 op=UNLOAD Dec 12 17:28:01.458000 audit[5140]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5126 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031323935313266633263626464646531613737396664653763346530 Dec 12 17:28:01.459000 audit: BPF prog-id=235 op=LOAD Dec 12 17:28:01.459000 audit[5140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5126 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.459000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031323935313266633263626464646531613737396664653763346530 Dec 12 17:28:01.460000 audit: BPF prog-id=236 op=LOAD Dec 12 17:28:01.460000 audit[5140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5126 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031323935313266633263626464646531613737396664653763346530 Dec 12 17:28:01.460000 audit: BPF prog-id=236 op=UNLOAD Dec 12 17:28:01.460000 audit[5140]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5126 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031323935313266633263626464646531613737396664653763346530 Dec 12 17:28:01.461000 audit: BPF prog-id=235 op=UNLOAD Dec 12 17:28:01.461000 audit[5140]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5126 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031323935313266633263626464646531613737396664653763346530 Dec 12 17:28:01.462000 audit: BPF prog-id=237 op=LOAD Dec 12 17:28:01.462000 audit[5140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5126 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031323935313266633263626464646531613737396664653763346530 Dec 12 17:28:01.467187 containerd[2012]: time="2025-12-12T17:28:01.465790966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bc6ffc4-ln8dl,Uid:df70d9f4-d51b-472c-b1f4-6f65f02c50aa,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:28:01.467187 containerd[2012]: time="2025-12-12T17:28:01.466010626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hb6pw,Uid:8797b6d6-7a5e-4865-91c2-2bd3d90f57cf,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:01.507025 containerd[2012]: time="2025-12-12T17:28:01.506154730Z" 
level=info msg="CreateContainer within sandbox \"9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:28:01.649511 systemd[1]: Started cri-containerd-bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b.scope - libcontainer container bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b. Dec 12 17:28:01.655404 containerd[2012]: time="2025-12-12T17:28:01.655281371Z" level=info msg="Container 4fa945e10dbaf038b482758892512d2b8c1ba526931c1b83b37d353bced6d0df: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:28:01.687332 containerd[2012]: time="2025-12-12T17:28:01.687245243Z" level=info msg="CreateContainer within sandbox \"9913675a9d0c8d6d8e7d48ec3168ed7f58f0828a4c1f8904a1cbc152011409d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fa945e10dbaf038b482758892512d2b8c1ba526931c1b83b37d353bced6d0df\"" Dec 12 17:28:01.688493 containerd[2012]: time="2025-12-12T17:28:01.688051655Z" level=info msg="StartContainer for \"4fa945e10dbaf038b482758892512d2b8c1ba526931c1b83b37d353bced6d0df\"" Dec 12 17:28:01.703951 containerd[2012]: time="2025-12-12T17:28:01.703775003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcd4b4759-jhpmk,Uid:146b6197-b092-48f2-948f-08d710a51bd7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0129512fc2cbddde1a779fde7c4e04b8db3e259a71c9823ffffef344bb296a26\"" Dec 12 17:28:01.709873 containerd[2012]: time="2025-12-12T17:28:01.709795415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:01.711659 containerd[2012]: time="2025-12-12T17:28:01.711553739Z" level=info msg="connecting to shim 4fa945e10dbaf038b482758892512d2b8c1ba526931c1b83b37d353bced6d0df" address="unix:///run/containerd/s/05690141b5d787de09b6a00c6ebbaf736a06821fdbee1ac0c611182c44cda5e9" protocol=ttrpc version=3 Dec 12 17:28:01.746000 audit: BPF prog-id=238 op=LOAD Dec 12 17:28:01.751000 audit: BPF prog-id=239 op=LOAD Dec 12 17:28:01.751000 audit[5194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5183 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262616565386232636438616262613566396461323835363361363530 Dec 12 17:28:01.751000 audit: BPF prog-id=239 op=UNLOAD Dec 12 17:28:01.751000 audit[5194]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5183 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262616565386232636438616262613566396461323835363361363530 Dec 12 17:28:01.753000 audit: BPF prog-id=240 op=LOAD Dec 12 17:28:01.753000 audit[5194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5183 pid=5194 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262616565386232636438616262613566396461323835363361363530 Dec 12 17:28:01.753000 audit: BPF prog-id=241 op=LOAD Dec 12 17:28:01.753000 audit[5194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5183 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262616565386232636438616262613566396461323835363361363530 Dec 12 17:28:01.753000 audit: BPF prog-id=241 op=UNLOAD Dec 12 17:28:01.753000 audit[5194]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5183 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262616565386232636438616262613566396461323835363361363530 Dec 12 17:28:01.759000 audit: BPF prog-id=240 op=UNLOAD Dec 12 17:28:01.759000 audit[5194]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5183 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262616565386232636438616262613566396461323835363361363530 Dec 12 17:28:01.759000 audit: BPF prog-id=242 op=LOAD Dec 12 17:28:01.759000 audit[5194]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5183 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262616565386232636438616262613566396461323835363361363530 Dec 12 17:28:01.826549 systemd[1]: Started cri-containerd-4fa945e10dbaf038b482758892512d2b8c1ba526931c1b83b37d353bced6d0df.scope - libcontainer container 4fa945e10dbaf038b482758892512d2b8c1ba526931c1b83b37d353bced6d0df. 
Dec 12 17:28:01.903000 audit: BPF prog-id=243 op=LOAD Dec 12 17:28:01.908000 audit: BPF prog-id=244 op=LOAD Dec 12 17:28:01.908000 audit[5254]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5076 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613934356531306462616630333862343832373538383932353132 Dec 12 17:28:01.908000 audit: BPF prog-id=244 op=UNLOAD Dec 12 17:28:01.908000 audit[5254]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5076 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613934356531306462616630333862343832373538383932353132 Dec 12 17:28:01.911000 audit: BPF prog-id=245 op=LOAD Dec 12 17:28:01.911000 audit[5254]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5076 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613934356531306462616630333862343832373538383932353132 Dec 12 17:28:01.911000 audit: BPF prog-id=246 op=LOAD Dec 12 17:28:01.911000 audit[5254]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5076 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613934356531306462616630333862343832373538383932353132 Dec 12 17:28:01.913000 audit: BPF prog-id=246 op=UNLOAD Dec 12 17:28:01.913000 audit[5254]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5076 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613934356531306462616630333862343832373538383932353132 Dec 12 17:28:01.913000 audit: BPF prog-id=245 op=UNLOAD Dec 12 17:28:01.913000 audit[5254]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5076 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613934356531306462616630333862343832373538383932353132 Dec 12 17:28:01.914000 audit: BPF prog-id=247 op=LOAD Dec 12 17:28:01.914000 audit[5254]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5076 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:01.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613934356531306462616630333862343832373538383932353132 Dec 12 17:28:01.997545 containerd[2012]: time="2025-12-12T17:28:01.997041673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:01.999672 containerd[2012]: time="2025-12-12T17:28:01.999527221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:01.999839 containerd[2012]: time="2025-12-12T17:28:01.999686389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:02.000070 kubelet[3325]: E1212 17:28:02.000004 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:02.002220 kubelet[3325]: E1212 17:28:02.000210 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:02.002220 kubelet[3325]: E1212 17:28:02.000437 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5fcd4b4759-jhpmk_calico-apiserver(146b6197-b092-48f2-948f-08d710a51bd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:02.002220 kubelet[3325]: E1212 17:28:02.000498 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:28:02.074437 containerd[2012]: time="2025-12-12T17:28:02.074251293Z" level=info msg="StartContainer for \"4fa945e10dbaf038b482758892512d2b8c1ba526931c1b83b37d353bced6d0df\" returns successfully" Dec 12 17:28:02.080080 containerd[2012]: time="2025-12-12T17:28:02.079960149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bc6ffc4-bklww,Uid:334ae9c1-5a14-4018-8c8f-986d294ed109,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bbaee8b2cd8abba5f9da28563a650752a71a714b495517f5f8168dea8ffed25b\"" Dec 12 17:28:02.091072 containerd[2012]: time="2025-12-12T17:28:02.091006245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:02.178070 systemd-networkd[1585]: califa7880f10c5: Gained IPv6LL Dec 12 17:28:02.278330 systemd-networkd[1585]: cali7570c05dca1: Link UP Dec 12 17:28:02.280567 systemd-networkd[1585]: cali7570c05dca1: Gained carrier Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:01.891 [INFO][5216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0 calico-apiserver-6b6bc6ffc4- calico-apiserver df70d9f4-d51b-472c-b1f4-6f65f02c50aa 888 0 2025-12-12 17:27:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b6bc6ffc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-55 calico-apiserver-6b6bc6ffc4-ln8dl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7570c05dca1 [] [] }} ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-ln8dl" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:01.891 [INFO][5216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-ln8dl" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.127 [INFO][5284] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" HandleID="k8s-pod-network.7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Workload="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.130 [INFO][5284] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" HandleID="k8s-pod-network.7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Workload="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-55", "pod":"calico-apiserver-6b6bc6ffc4-ln8dl", "timestamp":"2025-12-12 17:28:02.126959073 +0000 UTC"}, Hostname:"ip-172-31-16-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.134 [INFO][5284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.134 [INFO][5284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.134 [INFO][5284] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-55' Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.164 [INFO][5284] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" host="ip-172-31-16-55" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.174 [INFO][5284] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-55" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.190 [INFO][5284] ipam/ipam.go 511: Trying affinity for 192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.195 [INFO][5284] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.217 [INFO][5284] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.217 [INFO][5284] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.64/26 handle="k8s-pod-network.7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" host="ip-172-31-16-55" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.225 [INFO][5284] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9 Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.237 [INFO][5284] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.64/26 handle="k8s-pod-network.7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" host="ip-172-31-16-55" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.251 [INFO][5284] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.70/26] block=192.168.110.64/26 handle="k8s-pod-network.7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" host="ip-172-31-16-55" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.251 [INFO][5284] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.70/26] handle="k8s-pod-network.7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" host="ip-172-31-16-55" Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.251 [INFO][5284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
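Aside on the IPAM records above: Calico claims 192.168.110.70 from the block 192.168.110.64/26 that is affine to ip-172-31-16-55. A quick, purely illustrative check of that block with the Python standard library (addresses taken from the log; a /26 spans 64 addresses, .64 through .127, so later assignments in this section also fall inside it).

    import ipaddress

    block = ipaddress.ip_network("192.168.110.64/26")
    print(block.num_addresses, block[0], block[-1])
    # 64 192.168.110.64 192.168.110.127
    print(ipaddress.ip_address("192.168.110.70") in block)
    # True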
Dec 12 17:28:02.323312 containerd[2012]: 2025-12-12 17:28:02.252 [INFO][5284] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.70/26] IPv6=[] ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" HandleID="k8s-pod-network.7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Workload="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" Dec 12 17:28:02.326026 containerd[2012]: 2025-12-12 17:28:02.261 [INFO][5216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-ln8dl" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0", GenerateName:"calico-apiserver-6b6bc6ffc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"df70d9f4-d51b-472c-b1f4-6f65f02c50aa", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b6bc6ffc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"", Pod:"calico-apiserver-6b6bc6ffc4-ln8dl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7570c05dca1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:02.326026 containerd[2012]: 2025-12-12 17:28:02.263 [INFO][5216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.70/32] ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-ln8dl" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" Dec 12 17:28:02.326026 containerd[2012]: 2025-12-12 17:28:02.263 [INFO][5216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7570c05dca1 ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-ln8dl" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" Dec 12 17:28:02.326026 containerd[2012]: 2025-12-12 17:28:02.279 [INFO][5216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-ln8dl" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" Dec 12 17:28:02.326026 containerd[2012]: 2025-12-12 17:28:02.282 [INFO][5216] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-ln8dl" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0", GenerateName:"calico-apiserver-6b6bc6ffc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"df70d9f4-d51b-472c-b1f4-6f65f02c50aa", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b6bc6ffc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9", Pod:"calico-apiserver-6b6bc6ffc4-ln8dl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7570c05dca1", MAC:"1a:6c:e6:43:e3:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:02.326026 containerd[2012]: 2025-12-12 17:28:02.316 [INFO][5216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bc6ffc4-ln8dl" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--apiserver--6b6bc6ffc4--ln8dl-eth0" Dec 12 17:28:02.372943 containerd[2012]: time="2025-12-12T17:28:02.372538667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:02.374430 containerd[2012]: time="2025-12-12T17:28:02.374290391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:02.374809 containerd[2012]: time="2025-12-12T17:28:02.374578343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:02.376130 kubelet[3325]: E1212 17:28:02.375932 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:02.376342 kubelet[3325]: E1212 17:28:02.376254 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:02.377241 kubelet[3325]: E1212 17:28:02.377064 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b6bc6ffc4-bklww_calico-apiserver(334ae9c1-5a14-4018-8c8f-986d294ed109): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:02.377241 kubelet[3325]: E1212 17:28:02.377158 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:28:02.456366 systemd-networkd[1585]: calia278cc74aeb: Link UP Dec 12 17:28:02.460771 systemd-networkd[1585]: calia278cc74aeb: Gained carrier Dec 12 17:28:02.510123 containerd[2012]: time="2025-12-12T17:28:02.510047699Z" level=info msg="connecting to shim 7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9" address="unix:///run/containerd/s/93c6d7d1590ed068ed7adeb353518b1e95b61b0e277542f07871424744ceefef" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:01.812 [INFO][5202] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0 csi-node-driver- calico-system 8797b6d6-7a5e-4865-91c2-2bd3d90f57cf 782 0 2025-12-12 17:27:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-55 csi-node-driver-hb6pw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia278cc74aeb [] [] }} ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" Namespace="calico-system" Pod="csi-node-driver-hb6pw" WorkloadEndpoint="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:01.818 [INFO][5202] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" Namespace="calico-system" Pod="csi-node-driver-hb6pw" WorkloadEndpoint="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.140 [INFO][5279] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" HandleID="k8s-pod-network.307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" Workload="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.140 [INFO][5279] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" HandleID="k8s-pod-network.307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" 
Workload="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032cf10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-55", "pod":"csi-node-driver-hb6pw", "timestamp":"2025-12-12 17:28:02.139948785 +0000 UTC"}, Hostname:"ip-172-31-16-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.141 [INFO][5279] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.251 [INFO][5279] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.252 [INFO][5279] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-55' Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.287 [INFO][5279] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" host="ip-172-31-16-55" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.327 [INFO][5279] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-55" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.343 [INFO][5279] ipam/ipam.go 511: Trying affinity for 192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.350 [INFO][5279] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.359 [INFO][5279] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.359 [INFO][5279] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.64/26 handle="k8s-pod-network.307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" host="ip-172-31-16-55" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.364 [INFO][5279] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20 Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.383 [INFO][5279] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.64/26 handle="k8s-pod-network.307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" host="ip-172-31-16-55" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.415 [INFO][5279] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.71/26] block=192.168.110.64/26 handle="k8s-pod-network.307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" host="ip-172-31-16-55" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.416 [INFO][5279] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.71/26] handle="k8s-pod-network.307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" host="ip-172-31-16-55" Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.417 [INFO][5279] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:28:02.571307 containerd[2012]: 2025-12-12 17:28:02.417 [INFO][5279] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.71/26] IPv6=[] ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" HandleID="k8s-pod-network.307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" Workload="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" Dec 12 17:28:02.572522 containerd[2012]: 2025-12-12 17:28:02.431 [INFO][5202] cni-plugin/k8s.go 418: Populated endpoint ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" Namespace="calico-system" Pod="csi-node-driver-hb6pw" WorkloadEndpoint="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8797b6d6-7a5e-4865-91c2-2bd3d90f57cf", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"", Pod:"csi-node-driver-hb6pw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia278cc74aeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:02.572522 containerd[2012]: 2025-12-12 17:28:02.431 [INFO][5202] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.71/32] ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" Namespace="calico-system" Pod="csi-node-driver-hb6pw" WorkloadEndpoint="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" Dec 12 17:28:02.572522 containerd[2012]: 2025-12-12 17:28:02.431 [INFO][5202] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia278cc74aeb ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" Namespace="calico-system" Pod="csi-node-driver-hb6pw" WorkloadEndpoint="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" Dec 12 17:28:02.572522 containerd[2012]: 2025-12-12 17:28:02.463 [INFO][5202] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" Namespace="calico-system" Pod="csi-node-driver-hb6pw" WorkloadEndpoint="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" Dec 12 17:28:02.572522 containerd[2012]: 2025-12-12 17:28:02.491 [INFO][5202] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" 
Namespace="calico-system" Pod="csi-node-driver-hb6pw" WorkloadEndpoint="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8797b6d6-7a5e-4865-91c2-2bd3d90f57cf", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20", Pod:"csi-node-driver-hb6pw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia278cc74aeb", MAC:"4e:3b:93:0f:55:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:02.572522 containerd[2012]: 2025-12-12 17:28:02.558 [INFO][5202] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" Namespace="calico-system" Pod="csi-node-driver-hb6pw" WorkloadEndpoint="ip--172--31--16--55-k8s-csi--node--driver--hb6pw-eth0" Dec 12 17:28:02.625625 systemd[1]: Started cri-containerd-7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9.scope - libcontainer container 7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9. 
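Aside on the repeated 404s from ghcr.io recorded above: the tag ghcr.io/flatcar/calico/apiserver:v3.30.4 simply does not resolve at the registry. A rough sketch for reproducing the lookup directly, assuming anonymous pull of a public repository is permitted; the /token and /v2 endpoints follow the OCI distribution protocol, and details may differ for private images.

    import json
    import urllib.error
    import urllib.request

    repo, tag = "flatcar/calico/apiserver", "v3.30.4"
    # Request an anonymous pull token, then ask for the tag's manifest.
    token = json.load(urllib.request.urlopen(
        f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"))["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.oci.image.index.v1+json"})
    try:
        urllib.request.urlopen(req)
        print("tag resolves")
    except urllib.error.HTTPError as err:
        print("registry returned", err.code)  # 404 would match the PullImage failures above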
Dec 12 17:28:02.693063 systemd-networkd[1585]: cali8bc4ea1469c: Gained IPv6LL Dec 12 17:28:02.685000 audit[5368]: NETFILTER_CFG table=filter:133 family=2 entries=41 op=nft_register_chain pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:02.685000 audit[5368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23044 a0=3 a1=ffffe0390e40 a2=0 a3=ffff9d862fa8 items=0 ppid=4709 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.685000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:02.702018 systemd-networkd[1585]: califa9404f72d7: Link UP Dec 12 17:28:02.706184 systemd-networkd[1585]: califa9404f72d7: Gained carrier Dec 12 17:28:02.728440 containerd[2012]: time="2025-12-12T17:28:02.728252388Z" level=info msg="connecting to shim 307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20" address="unix:///run/containerd/s/eb7497134b6058a8e02e4dd16961a6af52c1a45a1c7451ecc885438dd741e320" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:01.937 [INFO][5200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0 calico-kube-controllers-6b7498b7d9- calico-system baecc54d-00d4-4fc9-9061-9d5f893dca48 885 0 2025-12-12 17:27:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b7498b7d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-55 calico-kube-controllers-6b7498b7d9-pzhlv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califa9404f72d7 [] [] }} ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Namespace="calico-system" Pod="calico-kube-controllers-6b7498b7d9-pzhlv" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:01.938 [INFO][5200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Namespace="calico-system" Pod="calico-kube-controllers-6b7498b7d9-pzhlv" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.148 [INFO][5290] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" HandleID="k8s-pod-network.88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Workload="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.150 [INFO][5290] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" HandleID="k8s-pod-network.88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Workload="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0x400031f470), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-55", "pod":"calico-kube-controllers-6b7498b7d9-pzhlv", "timestamp":"2025-12-12 17:28:02.148605093 +0000 UTC"}, Hostname:"ip-172-31-16-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.150 [INFO][5290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.417 [INFO][5290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.418 [INFO][5290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-55' Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.492 [INFO][5290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" host="ip-172-31-16-55" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.545 [INFO][5290] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-55" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.573 [INFO][5290] ipam/ipam.go 511: Trying affinity for 192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.581 [INFO][5290] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.594 [INFO][5290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.595 [INFO][5290] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.64/26 handle="k8s-pod-network.88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" host="ip-172-31-16-55" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.605 [INFO][5290] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.635 [INFO][5290] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.64/26 handle="k8s-pod-network.88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" host="ip-172-31-16-55" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.671 [INFO][5290] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.72/26] block=192.168.110.64/26 handle="k8s-pod-network.88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" host="ip-172-31-16-55" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.671 [INFO][5290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.72/26] handle="k8s-pod-network.88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" host="ip-172-31-16-55" Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.671 [INFO][5290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:28:02.781072 containerd[2012]: 2025-12-12 17:28:02.671 [INFO][5290] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.72/26] IPv6=[] ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" HandleID="k8s-pod-network.88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Workload="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" Dec 12 17:28:02.784626 containerd[2012]: 2025-12-12 17:28:02.683 [INFO][5200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Namespace="calico-system" Pod="calico-kube-controllers-6b7498b7d9-pzhlv" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0", GenerateName:"calico-kube-controllers-6b7498b7d9-", Namespace:"calico-system", SelfLink:"", UID:"baecc54d-00d4-4fc9-9061-9d5f893dca48", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b7498b7d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"", Pod:"calico-kube-controllers-6b7498b7d9-pzhlv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califa9404f72d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:02.784626 containerd[2012]: 2025-12-12 17:28:02.684 [INFO][5200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.72/32] ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Namespace="calico-system" Pod="calico-kube-controllers-6b7498b7d9-pzhlv" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" Dec 12 17:28:02.784626 containerd[2012]: 2025-12-12 17:28:02.684 [INFO][5200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa9404f72d7 ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Namespace="calico-system" Pod="calico-kube-controllers-6b7498b7d9-pzhlv" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" Dec 12 17:28:02.784626 containerd[2012]: 2025-12-12 17:28:02.709 [INFO][5200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Namespace="calico-system" Pod="calico-kube-controllers-6b7498b7d9-pzhlv" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" Dec 12 17:28:02.784626 containerd[2012]: 
2025-12-12 17:28:02.721 [INFO][5200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Namespace="calico-system" Pod="calico-kube-controllers-6b7498b7d9-pzhlv" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0", GenerateName:"calico-kube-controllers-6b7498b7d9-", Namespace:"calico-system", SelfLink:"", UID:"baecc54d-00d4-4fc9-9061-9d5f893dca48", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b7498b7d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d", Pod:"calico-kube-controllers-6b7498b7d9-pzhlv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califa9404f72d7", MAC:"a6:6b:60:71:70:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:02.784626 containerd[2012]: 2025-12-12 17:28:02.770 [INFO][5200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" Namespace="calico-system" Pod="calico-kube-controllers-6b7498b7d9-pzhlv" WorkloadEndpoint="ip--172--31--16--55-k8s-calico--kube--controllers--6b7498b7d9--pzhlv-eth0" Dec 12 17:28:02.824057 systemd[1]: Started cri-containerd-307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20.scope - libcontainer container 307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20. 
Dec 12 17:28:02.832000 audit: BPF prog-id=248 op=LOAD Dec 12 17:28:02.835000 audit: BPF prog-id=249 op=LOAD Dec 12 17:28:02.835000 audit[5351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5339 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633936326339616431383664656331386438643632653133353137 Dec 12 17:28:02.838000 audit: BPF prog-id=249 op=UNLOAD Dec 12 17:28:02.838000 audit[5351]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5339 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633936326339616431383664656331386438643632653133353137 Dec 12 17:28:02.842000 audit: BPF prog-id=250 op=LOAD Dec 12 17:28:02.842000 audit[5351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5339 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633936326339616431383664656331386438643632653133353137 Dec 12 17:28:02.844000 audit: BPF prog-id=251 op=LOAD Dec 12 17:28:02.844000 audit[5351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5339 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633936326339616431383664656331386438643632653133353137 Dec 12 17:28:02.850000 audit: BPF prog-id=251 op=UNLOAD Dec 12 17:28:02.850000 audit[5351]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5339 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633936326339616431383664656331386438643632653133353137 Dec 12 17:28:02.850000 audit: BPF prog-id=250 op=UNLOAD Dec 12 17:28:02.850000 audit[5351]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5339 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633936326339616431383664656331386438643632653133353137 Dec 12 17:28:02.850000 audit: BPF prog-id=252 op=LOAD Dec 12 17:28:02.850000 audit[5351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5339 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633936326339616431383664656331386438643632653133353137 Dec 12 17:28:02.883502 systemd-networkd[1585]: cali8bdcb2d9910: Gained IPv6LL Dec 12 17:28:02.886600 containerd[2012]: time="2025-12-12T17:28:02.884491561Z" level=info msg="connecting to shim 88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d" address="unix:///run/containerd/s/4b7f53dba68d00aec42ab999dad1332e576eaa1d933b297672ae369214a677f9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:02.908000 audit: BPF prog-id=253 op=LOAD Dec 12 17:28:02.910000 audit: BPF prog-id=254 op=LOAD Dec 12 17:28:02.910000 audit[5401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=5388 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376332313138666136353531666237656634373932313836666434 Dec 12 17:28:02.911000 audit: BPF prog-id=254 op=UNLOAD Dec 12 17:28:02.911000 audit[5401]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5388 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376332313138666136353531666237656634373932313836666434 Dec 12 17:28:02.912000 audit: BPF prog-id=255 op=LOAD Dec 12 17:28:02.912000 audit[5401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=5388 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.912000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376332313138666136353531666237656634373932313836666434 Dec 12 17:28:02.913000 audit: BPF prog-id=256 op=LOAD Dec 12 17:28:02.913000 audit[5401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=5388 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376332313138666136353531666237656634373932313836666434 Dec 12 17:28:02.914000 audit: BPF prog-id=256 op=UNLOAD Dec 12 17:28:02.914000 audit[5401]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5388 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376332313138666136353531666237656634373932313836666434 Dec 12 17:28:02.914000 audit: BPF prog-id=255 op=UNLOAD Dec 12 17:28:02.914000 audit[5401]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5388 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376332313138666136353531666237656634373932313836666434 Dec 12 17:28:02.915000 audit: BPF prog-id=257 op=LOAD Dec 12 17:28:02.915000 audit[5401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=5388 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330376332313138666136353531666237656634373932313836666434 Dec 12 17:28:02.981375 systemd[1]: Started cri-containerd-88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d.scope - libcontainer container 88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d. 
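For reference when reading the audit SYSCALL records interleaved above: arch=c00000b7 is AUDIT_ARCH_AARCH64, so the syscall numbers come from the arm64 table, where 280 is bpf(2) and 57 is close(2); that is consistent with each BPF prog-id LOAD being paired with a syscall=280 record and each UNLOAD with a syscall=57 record as the program fd is closed. A tiny, illustrative lookup covering just these two numbers:

    # Subset of the arm64 syscall table relevant to the records in this log.
    AARCH64_SYSCALLS = {57: "close", 280: "bpf"}

    def name_syscall(audit_arch: int, nr: int) -> str:
        if audit_arch != 0xC00000B7:  # AUDIT_ARCH_AARCH64
            return f"unknown arch {audit_arch:#x}"
        return AARCH64_SYSCALLS.get(nr, f"syscall {nr}")

    print(name_syscall(0xC00000B7, 280), name_syscall(0xC00000B7, 57))  # bpf close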
Dec 12 17:28:02.994264 containerd[2012]: time="2025-12-12T17:28:02.993743834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hb6pw,Uid:8797b6d6-7a5e-4865-91c2-2bd3d90f57cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"307c2118fa6551fb7ef4792186fd4348815464a56fe430619f1faa34d4d3ec20\"" Dec 12 17:28:03.001878 containerd[2012]: time="2025-12-12T17:28:03.001012738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:03.031453 kubelet[3325]: E1212 17:28:03.029828 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:28:03.031453 kubelet[3325]: E1212 17:28:03.031374 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:28:03.003000 audit[5465]: NETFILTER_CFG table=filter:134 family=2 entries=84 op=nft_register_chain pid=5465 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:03.003000 audit[5465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=42656 a0=3 a1=fffff0f57450 a2=0 a3=ffffb21a3fa8 items=0 ppid=4709 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.003000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:03.053671 kubelet[3325]: I1212 17:28:03.053558 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cqttw" podStartSLOduration=53.05353159 podStartE2EDuration="53.05353159s" podCreationTimestamp="2025-12-12 17:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:28:03.050320642 +0000 UTC m=+58.072773290" watchObservedRunningTime="2025-12-12 17:28:03.05353159 +0000 UTC m=+58.075984226" Dec 12 17:28:03.118000 audit: BPF prog-id=258 op=LOAD Dec 12 17:28:03.120000 audit: BPF prog-id=259 op=LOAD Dec 12 17:28:03.120000 audit[5446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5434 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.120000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838663531633132346430633033656337336331663534376339356431 Dec 12 17:28:03.120000 audit: BPF prog-id=259 op=UNLOAD Dec 12 17:28:03.120000 audit[5446]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5434 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838663531633132346430633033656337336331663534376339356431 Dec 12 17:28:03.121000 audit: BPF prog-id=260 op=LOAD Dec 12 17:28:03.121000 audit[5446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5434 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838663531633132346430633033656337336331663534376339356431 Dec 12 17:28:03.121000 audit: BPF prog-id=261 op=LOAD Dec 12 17:28:03.121000 audit[5446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5434 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838663531633132346430633033656337336331663534376339356431 Dec 12 17:28:03.121000 audit: BPF prog-id=261 op=UNLOAD Dec 12 17:28:03.121000 audit[5446]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5434 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838663531633132346430633033656337336331663534376339356431 Dec 12 17:28:03.121000 audit: BPF prog-id=260 op=UNLOAD Dec 12 17:28:03.121000 audit[5446]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5434 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.121000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838663531633132346430633033656337336331663534376339356431 Dec 12 17:28:03.122000 audit: BPF prog-id=262 op=LOAD Dec 12 17:28:03.122000 audit[5446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5434 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838663531633132346430633033656337336331663534376339356431 Dec 12 17:28:03.222722 containerd[2012]: time="2025-12-12T17:28:03.222613031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bc6ffc4-ln8dl,Uid:df70d9f4-d51b-472c-b1f4-6f65f02c50aa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7cc962c9ad186dec18d8d62e1351780ccef0ff98ad7ec13cc78176622179a2c9\"" Dec 12 17:28:03.230000 audit[5474]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:03.230000 audit[5474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe1978440 a2=0 a3=1 items=0 ppid=3625 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:03.252000 audit[5474]: NETFILTER_CFG table=nat:136 family=2 entries=44 op=nft_register_rule pid=5474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:03.252000 audit[5474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe1978440 a2=0 a3=1 items=0 ppid=3625 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:03.293417 containerd[2012]: time="2025-12-12T17:28:03.293337035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:03.296346 containerd[2012]: time="2025-12-12T17:28:03.295998347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:03.296346 containerd[2012]: time="2025-12-12T17:28:03.296123615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:03.297081 kubelet[3325]: E1212 17:28:03.296769 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:03.297081 kubelet[3325]: E1212 17:28:03.296901 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:03.299079 kubelet[3325]: E1212 17:28:03.298130 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:03.299252 containerd[2012]: time="2025-12-12T17:28:03.299114855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:03.349298 containerd[2012]: time="2025-12-12T17:28:03.349231619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b7498b7d9-pzhlv,Uid:baecc54d-00d4-4fc9-9061-9d5f893dca48,Namespace:calico-system,Attempt:0,} returns sandbox id \"88f51c124d0c03ec73c1f547c95d11a31ebdda0a157ebe004adcc0d9257aad8d\"" Dec 12 17:28:03.443928 containerd[2012]: time="2025-12-12T17:28:03.442608816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c927k,Uid:87213e25-1478-4b01-ac6c-54452f7f57dd,Namespace:calico-system,Attempt:0,}" Dec 12 17:28:03.461103 systemd-networkd[1585]: cali7570c05dca1: Gained IPv6LL Dec 12 17:28:03.589885 containerd[2012]: time="2025-12-12T17:28:03.589780045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:03.592279 containerd[2012]: time="2025-12-12T17:28:03.592205545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:03.592995 kubelet[3325]: E1212 17:28:03.592512 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:03.592995 kubelet[3325]: E1212 17:28:03.592576 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:03.592995 kubelet[3325]: E1212 17:28:03.592802 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b6bc6ffc4-ln8dl_calico-apiserver(df70d9f4-d51b-472c-b1f4-6f65f02c50aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:03.595500 containerd[2012]: time="2025-12-12T17:28:03.593234149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: 
active requests=0, bytes read=0" Dec 12 17:28:03.595500 containerd[2012]: time="2025-12-12T17:28:03.595238653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:03.596931 kubelet[3325]: E1212 17:28:03.595044 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:28:03.730144 systemd-networkd[1585]: calibb8303f34d4: Link UP Dec 12 17:28:03.732815 systemd-networkd[1585]: calibb8303f34d4: Gained carrier Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.571 [INFO][5489] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0 goldmane-7c778bb748- calico-system 87213e25-1478-4b01-ac6c-54452f7f57dd 886 0 2025-12-12 17:27:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-55 goldmane-7c778bb748-c927k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibb8303f34d4 [] [] }} ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Namespace="calico-system" Pod="goldmane-7c778bb748-c927k" WorkloadEndpoint="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.572 [INFO][5489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Namespace="calico-system" Pod="goldmane-7c778bb748-c927k" WorkloadEndpoint="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.641 [INFO][5501] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" HandleID="k8s-pod-network.fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Workload="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.641 [INFO][5501] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" HandleID="k8s-pod-network.fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Workload="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-55", "pod":"goldmane-7c778bb748-c927k", "timestamp":"2025-12-12 17:28:03.641465701 +0000 UTC"}, Hostname:"ip-172-31-16-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.641 [INFO][5501] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
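
The audit PROCTITLE records in the entries above carry the audited command line as hex-encoded bytes, with NUL separators between the argv elements. As a minimal sketch (the hex value is copied verbatim from the iptables-restore audit record above; nothing else is assumed), the encoding can be reversed with the Python standard library:

# Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
proctitle_hex = (
    "69707461626C65732D726573746F7265002D770035"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# Prints: iptables-restore -w 5 --noflush --counters
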
Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.641 [INFO][5501] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.642 [INFO][5501] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-55' Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.662 [INFO][5501] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" host="ip-172-31-16-55" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.670 [INFO][5501] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-55" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.681 [INFO][5501] ipam/ipam.go 511: Trying affinity for 192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.685 [INFO][5501] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.690 [INFO][5501] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.64/26 host="ip-172-31-16-55" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.690 [INFO][5501] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.64/26 handle="k8s-pod-network.fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" host="ip-172-31-16-55" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.693 [INFO][5501] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609 Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.703 [INFO][5501] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.64/26 handle="k8s-pod-network.fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" host="ip-172-31-16-55" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.717 [INFO][5501] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.73/26] block=192.168.110.64/26 handle="k8s-pod-network.fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" host="ip-172-31-16-55" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.717 [INFO][5501] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.73/26] handle="k8s-pod-network.fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" host="ip-172-31-16-55" Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.717 [INFO][5501] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
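
The IPAM entries above show Calico confirming an affinity for the 192.168.110.64/26 block on ip-172-31-16-55 and then claiming 192.168.110.73 from it for the goldmane pod. A small check with Python's ipaddress module (values copied from the log entries; this is only an illustration of the block/address relationship) confirms the claimed address lies inside the host's affine block:

import ipaddress

# Block and address taken from the IPAM log entries above.
block = ipaddress.ip_network("192.168.110.64/26")
pod_ip = ipaddress.ip_address("192.168.110.73")

print(pod_ip in block)      # True: the claimed IP is inside the affine /26 block
print(block.num_addresses)  # 64 addresses in a /26 IPAM block
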
Dec 12 17:28:03.769443 containerd[2012]: 2025-12-12 17:28:03.717 [INFO][5501] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.73/26] IPv6=[] ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" HandleID="k8s-pod-network.fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Workload="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" Dec 12 17:28:03.773829 containerd[2012]: 2025-12-12 17:28:03.721 [INFO][5489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Namespace="calico-system" Pod="goldmane-7c778bb748-c927k" WorkloadEndpoint="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87213e25-1478-4b01-ac6c-54452f7f57dd", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"", Pod:"goldmane-7c778bb748-c927k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb8303f34d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:03.773829 containerd[2012]: 2025-12-12 17:28:03.722 [INFO][5489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.73/32] ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Namespace="calico-system" Pod="goldmane-7c778bb748-c927k" WorkloadEndpoint="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" Dec 12 17:28:03.773829 containerd[2012]: 2025-12-12 17:28:03.722 [INFO][5489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb8303f34d4 ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Namespace="calico-system" Pod="goldmane-7c778bb748-c927k" WorkloadEndpoint="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" Dec 12 17:28:03.773829 containerd[2012]: 2025-12-12 17:28:03.734 [INFO][5489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Namespace="calico-system" Pod="goldmane-7c778bb748-c927k" WorkloadEndpoint="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" Dec 12 17:28:03.773829 containerd[2012]: 2025-12-12 17:28:03.736 [INFO][5489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Namespace="calico-system" Pod="goldmane-7c778bb748-c927k" 
WorkloadEndpoint="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87213e25-1478-4b01-ac6c-54452f7f57dd", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-55", ContainerID:"fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609", Pod:"goldmane-7c778bb748-c927k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb8303f34d4", MAC:"6a:8b:46:84:d8:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:28:03.773829 containerd[2012]: 2025-12-12 17:28:03.758 [INFO][5489] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" Namespace="calico-system" Pod="goldmane-7c778bb748-c927k" WorkloadEndpoint="ip--172--31--16--55-k8s-goldmane--7c778bb748--c927k-eth0" Dec 12 17:28:03.778392 systemd-networkd[1585]: califa9404f72d7: Gained IPv6LL Dec 12 17:28:03.819000 audit[5517]: NETFILTER_CFG table=filter:137 family=2 entries=56 op=nft_register_chain pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:28:03.819000 audit[5517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28712 a0=3 a1=ffffe84b4cd0 a2=0 a3=ffffa82dcfa8 items=0 ppid=4709 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.819000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:28:03.840349 containerd[2012]: time="2025-12-12T17:28:03.840167954Z" level=info msg="connecting to shim fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609" address="unix:///run/containerd/s/e1eb3f34e1919921e3bfd7438dd804d76593402069f5091a94766ea7d5d16133" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:28:03.895391 systemd[1]: Started cri-containerd-fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609.scope - libcontainer container fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609. 
Dec 12 17:28:03.905180 containerd[2012]: time="2025-12-12T17:28:03.905114582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:03.908306 containerd[2012]: time="2025-12-12T17:28:03.907759322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:03.908306 containerd[2012]: time="2025-12-12T17:28:03.907805270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:03.909267 kubelet[3325]: E1212 17:28:03.909200 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:03.910145 kubelet[3325]: E1212 17:28:03.909496 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:03.910145 kubelet[3325]: E1212 17:28:03.909755 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:03.910145 kubelet[3325]: E1212 17:28:03.909822 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:28:03.910961 containerd[2012]: time="2025-12-12T17:28:03.910769906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:28:03.935000 audit: BPF prog-id=263 op=LOAD Dec 12 17:28:03.936000 audit: BPF prog-id=264 op=LOAD Dec 12 17:28:03.936000 audit[5539]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5527 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.936000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662313261626361376333356166363961333730393662626135636665 Dec 12 17:28:03.937000 audit: BPF prog-id=264 op=UNLOAD Dec 12 17:28:03.937000 audit[5539]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5527 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662313261626361376333356166363961333730393662626135636665 Dec 12 17:28:03.937000 audit: BPF prog-id=265 op=LOAD Dec 12 17:28:03.937000 audit[5539]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5527 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662313261626361376333356166363961333730393662626135636665 Dec 12 17:28:03.937000 audit: BPF prog-id=266 op=LOAD Dec 12 17:28:03.937000 audit[5539]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5527 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662313261626361376333356166363961333730393662626135636665 Dec 12 17:28:03.937000 audit: BPF prog-id=266 op=UNLOAD Dec 12 17:28:03.937000 audit[5539]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5527 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662313261626361376333356166363961333730393662626135636665 Dec 12 17:28:03.937000 audit: BPF prog-id=265 op=UNLOAD Dec 12 17:28:03.937000 audit[5539]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5527 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.937000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662313261626361376333356166363961333730393662626135636665 Dec 12 17:28:03.937000 audit: BPF prog-id=267 op=LOAD Dec 12 17:28:03.937000 audit[5539]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5527 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662313261626361376333356166363961333730393662626135636665 Dec 12 17:28:04.018330 containerd[2012]: time="2025-12-12T17:28:04.018186215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c927k,Uid:87213e25-1478-4b01-ac6c-54452f7f57dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb12abca7c35af69a37096bba5cfe6e4022f0ac9fa013788fa48133194524609\"" Dec 12 17:28:04.045812 kubelet[3325]: E1212 17:28:04.045531 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:28:04.060529 kubelet[3325]: E1212 17:28:04.058896 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:28:04.064865 kubelet[3325]: E1212 17:28:04.064773 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:28:04.099026 systemd-networkd[1585]: calia278cc74aeb: Gained IPv6LL Dec 12 17:28:04.223892 containerd[2012]: 
time="2025-12-12T17:28:04.223736904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:04.227905 containerd[2012]: time="2025-12-12T17:28:04.226325316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:28:04.228197 containerd[2012]: time="2025-12-12T17:28:04.226385412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:04.228687 kubelet[3325]: E1212 17:28:04.228623 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:04.228821 kubelet[3325]: E1212 17:28:04.228693 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:04.230275 kubelet[3325]: E1212 17:28:04.230205 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b7498b7d9-pzhlv_calico-system(baecc54d-00d4-4fc9-9061-9d5f893dca48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:04.230446 kubelet[3325]: E1212 17:28:04.230284 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:28:04.231403 containerd[2012]: time="2025-12-12T17:28:04.231337068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:28:04.377000 audit[5569]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:04.377000 audit[5569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff9c19cf0 a2=0 a3=1 items=0 ppid=3625 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:04.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:04.405000 audit[5569]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:04.405000 audit[5569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 
a1=fffff9c19cf0 a2=0 a3=1 items=0 ppid=3625 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:04.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:04.512909 containerd[2012]: time="2025-12-12T17:28:04.512654413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:04.515904 containerd[2012]: time="2025-12-12T17:28:04.515781481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:28:04.516604 containerd[2012]: time="2025-12-12T17:28:04.516038449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:04.516997 kubelet[3325]: E1212 17:28:04.516948 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:04.517377 kubelet[3325]: E1212 17:28:04.517306 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:04.517664 kubelet[3325]: E1212 17:28:04.517627 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c927k_calico-system(87213e25-1478-4b01-ac6c-54452f7f57dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:04.517822 kubelet[3325]: E1212 17:28:04.517787 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:28:05.064834 kubelet[3325]: E1212 17:28:05.064638 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:28:05.068448 kubelet[3325]: E1212 17:28:05.068265 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:28:05.069124 kubelet[3325]: E1212 17:28:05.069072 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:28:05.069800 kubelet[3325]: E1212 17:28:05.069291 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:28:05.456000 audit[5575]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:05.458586 kernel: kauditd_printk_skb: 208 callbacks suppressed Dec 12 17:28:05.458697 kernel: audit: type=1325 audit(1765560485.456:764): table=filter:140 family=2 entries=14 op=nft_register_rule pid=5575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:05.456000 audit[5575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc34973e0 a2=0 a3=1 items=0 ppid=3625 pid=5575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:05.468297 kernel: audit: type=1300 audit(1765560485.456:764): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc34973e0 a2=0 a3=1 items=0 ppid=3625 pid=5575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:05.468537 kernel: audit: type=1327 audit(1765560485.456:764): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:05.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:05.469000 audit[5575]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5575 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:05.474449 kernel: audit: type=1325 audit(1765560485.469:765): table=nat:141 family=2 entries=20 op=nft_register_rule pid=5575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:05.469000 audit[5575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc34973e0 a2=0 a3=1 items=0 ppid=3625 pid=5575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:05.480969 kernel: audit: type=1300 audit(1765560485.469:765): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc34973e0 a2=0 a3=1 items=0 ppid=3625 pid=5575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:05.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:05.483833 kernel: audit: type=1327 audit(1765560485.469:765): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:05.570182 systemd-networkd[1585]: calibb8303f34d4: Gained IPv6LL Dec 12 17:28:07.738870 ntpd[1953]: Listen normally on 6 vxlan.calico 192.168.110.64:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 6 vxlan.calico 192.168.110.64:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 7 cali8baae981d08 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 8 vxlan.calico [fe80::641c:74ff:fe63:ffdd%5]:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 9 calif70b3f1480a [fe80::ecee:eeff:feee:eeee%8]:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 10 califa7880f10c5 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 11 cali8bdcb2d9910 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 12 cali8bc4ea1469c [fe80::ecee:eeff:feee:eeee%11]:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 13 cali7570c05dca1 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 14 calia278cc74aeb [fe80::ecee:eeff:feee:eeee%13]:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 15 califa9404f72d7 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 12 17:28:07.740696 ntpd[1953]: 12 Dec 17:28:07 ntpd[1953]: Listen normally on 16 calibb8303f34d4 [fe80::ecee:eeff:feee:eeee%15]:123 Dec 12 17:28:07.738961 ntpd[1953]: Listen normally on 7 cali8baae981d08 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 12 17:28:07.739010 ntpd[1953]: Listen normally on 8 vxlan.calico [fe80::641c:74ff:fe63:ffdd%5]:123 Dec 12 17:28:07.739056 ntpd[1953]: Listen normally on 9 calif70b3f1480a [fe80::ecee:eeff:feee:eeee%8]:123 Dec 12 17:28:07.739102 ntpd[1953]: Listen normally on 10 califa7880f10c5 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 12 17:28:07.739146 ntpd[1953]: Listen normally on 11 cali8bdcb2d9910 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 12 17:28:07.739191 ntpd[1953]: Listen normally on 12 cali8bc4ea1469c 
[fe80::ecee:eeff:feee:eeee%11]:123 Dec 12 17:28:07.739236 ntpd[1953]: Listen normally on 13 cali7570c05dca1 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 12 17:28:07.739280 ntpd[1953]: Listen normally on 14 calia278cc74aeb [fe80::ecee:eeff:feee:eeee%13]:123 Dec 12 17:28:07.739324 ntpd[1953]: Listen normally on 15 califa9404f72d7 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 12 17:28:07.739372 ntpd[1953]: Listen normally on 16 calibb8303f34d4 [fe80::ecee:eeff:feee:eeee%15]:123 Dec 12 17:28:11.427967 containerd[2012]: time="2025-12-12T17:28:11.426915019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:28:11.723381 containerd[2012]: time="2025-12-12T17:28:11.723149985Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:11.725501 containerd[2012]: time="2025-12-12T17:28:11.725404317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:28:11.725682 containerd[2012]: time="2025-12-12T17:28:11.725426145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:11.726156 kubelet[3325]: E1212 17:28:11.726081 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:11.727372 kubelet[3325]: E1212 17:28:11.726763 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:11.727372 kubelet[3325]: E1212 17:28:11.726950 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-cbdd468db-lnm8s_calico-system(3ec49c85-5274-4ed7-b914-fc08a271b46e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:11.730254 containerd[2012]: time="2025-12-12T17:28:11.729749541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:28:11.977730 containerd[2012]: time="2025-12-12T17:28:11.977584138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:11.980502 containerd[2012]: time="2025-12-12T17:28:11.980271898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:28:11.980502 containerd[2012]: time="2025-12-12T17:28:11.980406934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:11.980771 kubelet[3325]: E1212 17:28:11.980675 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:11.980771 kubelet[3325]: E1212 17:28:11.980740 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:11.980958 kubelet[3325]: E1212 17:28:11.980908 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-cbdd468db-lnm8s_calico-system(3ec49c85-5274-4ed7-b914-fc08a271b46e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:11.981665 kubelet[3325]: E1212 17:28:11.981016 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:28:14.427993 containerd[2012]: time="2025-12-12T17:28:14.427190134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:14.682870 containerd[2012]: time="2025-12-12T17:28:14.682665276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:14.685225 containerd[2012]: time="2025-12-12T17:28:14.685073940Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:14.685602 containerd[2012]: time="2025-12-12T17:28:14.685149984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:14.686458 kubelet[3325]: E1212 17:28:14.685407 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:14.686458 kubelet[3325]: E1212 17:28:14.685468 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:14.686458 kubelet[3325]: E1212 17:28:14.685605 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-5fcd4b4759-jhpmk_calico-apiserver(146b6197-b092-48f2-948f-08d710a51bd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:14.686458 kubelet[3325]: E1212 17:28:14.685661 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:28:15.430111 containerd[2012]: time="2025-12-12T17:28:15.427834331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:28:15.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.55:22-139.178.68.195:41230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:15.456386 systemd[1]: Started sshd@7-172.31.16.55:22-139.178.68.195:41230.service - OpenSSH per-connection server daemon (139.178.68.195:41230). Dec 12 17:28:15.469972 kernel: audit: type=1130 audit(1765560495.454:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.55:22-139.178.68.195:41230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:15.689000 audit[5592]: USER_ACCT pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.697654 sshd[5592]: Accepted publickey for core from 139.178.68.195 port 41230 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:15.699898 kernel: audit: type=1101 audit(1765560495.689:767): pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.698000 audit[5592]: CRED_ACQ pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.713415 kernel: audit: type=1103 audit(1765560495.698:768): pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.713585 kernel: audit: type=1006 audit(1765560495.698:769): pid=5592 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 12 17:28:15.698000 audit[5592]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda09ae20 a2=3 a3=0 items=0 ppid=1 pid=5592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.715167 sshd-session[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:15.698000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:15.726256 kernel: audit: type=1300 audit(1765560495.698:769): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda09ae20 a2=3 a3=0 items=0 ppid=1 pid=5592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.726386 kernel: audit: type=1327 audit(1765560495.698:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:15.740529 systemd-logind[1962]: New session 8 of user core. Dec 12 17:28:15.748214 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:28:15.749324 containerd[2012]: time="2025-12-12T17:28:15.749265133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:15.751649 containerd[2012]: time="2025-12-12T17:28:15.751494301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:28:15.751649 containerd[2012]: time="2025-12-12T17:28:15.751569265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:15.753350 kubelet[3325]: E1212 17:28:15.753159 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:15.753350 kubelet[3325]: E1212 17:28:15.753232 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:15.756657 kubelet[3325]: E1212 17:28:15.753358 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c927k_calico-system(87213e25-1478-4b01-ac6c-54452f7f57dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:15.756657 kubelet[3325]: E1212 17:28:15.753411 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:28:15.761000 audit[5592]: USER_START pid=5592 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.774984 kernel: audit: type=1105 audit(1765560495.761:770): pid=5592 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.773000 audit[5598]: CRED_ACQ pid=5598 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.782006 kernel: audit: type=1103 audit(1765560495.773:771): pid=5598 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.985717 sshd[5598]: Connection closed by 139.178.68.195 port 41230 Dec 12 17:28:15.986144 sshd-session[5592]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:15.988000 audit[5592]: USER_END pid=5592 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.996078 systemd[1]: sshd@7-172.31.16.55:22-139.178.68.195:41230.service: Deactivated successfully. Dec 12 17:28:15.988000 audit[5592]: CRED_DISP pid=5592 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:15.997966 kernel: audit: type=1106 audit(1765560495.988:772): pid=5592 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:16.004728 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:28:15.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.55:22-139.178.68.195:41230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:16.009028 kernel: audit: type=1104 audit(1765560495.988:773): pid=5592 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:16.011439 systemd-logind[1962]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:28:16.013914 systemd-logind[1962]: Removed session 8. 
Dec 12 17:28:17.426614 containerd[2012]: time="2025-12-12T17:28:17.426269665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:17.706575 containerd[2012]: time="2025-12-12T17:28:17.706406343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:17.708966 containerd[2012]: time="2025-12-12T17:28:17.708895419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:17.709074 containerd[2012]: time="2025-12-12T17:28:17.709012935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:17.709311 kubelet[3325]: E1212 17:28:17.709255 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:17.710191 kubelet[3325]: E1212 17:28:17.709322 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:17.710191 kubelet[3325]: E1212 17:28:17.709442 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b6bc6ffc4-ln8dl_calico-apiserver(df70d9f4-d51b-472c-b1f4-6f65f02c50aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:17.710191 kubelet[3325]: E1212 17:28:17.709491 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:28:18.425557 containerd[2012]: time="2025-12-12T17:28:18.425429186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:18.723398 containerd[2012]: time="2025-12-12T17:28:18.723227092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:18.725655 containerd[2012]: time="2025-12-12T17:28:18.725518660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:18.725655 containerd[2012]: time="2025-12-12T17:28:18.725590756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:18.726028 kubelet[3325]: E1212 17:28:18.725945 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:18.727120 kubelet[3325]: E1212 17:28:18.726046 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:18.727120 kubelet[3325]: E1212 17:28:18.726301 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:18.729909 containerd[2012]: time="2025-12-12T17:28:18.729774544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:19.067525 containerd[2012]: time="2025-12-12T17:28:19.067316365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:19.069551 containerd[2012]: time="2025-12-12T17:28:19.069447265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:19.070027 containerd[2012]: time="2025-12-12T17:28:19.069511765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:19.070279 kubelet[3325]: E1212 17:28:19.070217 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:19.070574 kubelet[3325]: E1212 17:28:19.070291 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:19.070574 kubelet[3325]: E1212 17:28:19.070440 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:19.070574 kubelet[3325]: E1212 17:28:19.070514 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:28:19.427965 containerd[2012]: time="2025-12-12T17:28:19.427582623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:19.721544 containerd[2012]: time="2025-12-12T17:28:19.721118561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:19.724634 containerd[2012]: time="2025-12-12T17:28:19.724392293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:19.724634 containerd[2012]: time="2025-12-12T17:28:19.724456325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:19.725302 kubelet[3325]: E1212 17:28:19.725179 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:19.725302 kubelet[3325]: E1212 17:28:19.725275 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:19.725838 kubelet[3325]: E1212 17:28:19.725467 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b6bc6ffc4-bklww_calico-apiserver(334ae9c1-5a14-4018-8c8f-986d294ed109): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:19.725994 kubelet[3325]: E1212 17:28:19.725896 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:28:20.425407 containerd[2012]: time="2025-12-12T17:28:20.425332936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:28:20.717444 containerd[2012]: time="2025-12-12T17:28:20.716991042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:20.719304 containerd[2012]: time="2025-12-12T17:28:20.719164734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:28:20.719304 containerd[2012]: time="2025-12-12T17:28:20.719224434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:20.719497 kubelet[3325]: E1212 17:28:20.719459 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:20.720052 kubelet[3325]: E1212 17:28:20.719518 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:20.720052 kubelet[3325]: E1212 17:28:20.719628 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b7498b7d9-pzhlv_calico-system(baecc54d-00d4-4fc9-9061-9d5f893dca48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:20.720052 kubelet[3325]: E1212 17:28:20.719677 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:28:21.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.55:22-139.178.68.195:33834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:21.025277 systemd[1]: Started sshd@8-172.31.16.55:22-139.178.68.195:33834.service - OpenSSH per-connection server daemon (139.178.68.195:33834). Dec 12 17:28:21.028600 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:21.028680 kernel: audit: type=1130 audit(1765560501.024:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.55:22-139.178.68.195:33834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:21.213000 audit[5617]: USER_ACCT pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.221728 sshd[5617]: Accepted publickey for core from 139.178.68.195 port 33834 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:21.220000 audit[5617]: CRED_ACQ pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.223561 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:21.229145 kernel: audit: type=1101 audit(1765560501.213:776): pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.229222 kernel: audit: type=1103 audit(1765560501.220:777): pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.229899 kernel: audit: type=1006 audit(1765560501.220:778): pid=5617 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 12 17:28:21.220000 audit[5617]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcca1dd80 a2=3 a3=0 items=0 ppid=1 pid=5617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:21.239310 kernel: audit: type=1300 audit(1765560501.220:778): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcca1dd80 a2=3 a3=0 items=0 ppid=1 pid=5617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:21.220000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:21.242991 kernel: audit: type=1327 audit(1765560501.220:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:21.248755 systemd-logind[1962]: New session 9 of user core. Dec 12 17:28:21.256167 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 12 17:28:21.260000 audit[5617]: USER_START pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.271212 kernel: audit: type=1105 audit(1765560501.260:779): pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.271352 kernel: audit: type=1103 audit(1765560501.269:780): pid=5620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.269000 audit[5620]: CRED_ACQ pid=5620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.466561 sshd[5620]: Connection closed by 139.178.68.195 port 33834 Dec 12 17:28:21.469151 sshd-session[5617]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:21.469000 audit[5617]: USER_END pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.477060 systemd[1]: sshd@8-172.31.16.55:22-139.178.68.195:33834.service: Deactivated successfully. Dec 12 17:28:21.481919 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:28:21.470000 audit[5617]: CRED_DISP pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.491796 kernel: audit: type=1106 audit(1765560501.469:781): pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.491923 kernel: audit: type=1104 audit(1765560501.470:782): pid=5617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:21.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.55:22-139.178.68.195:33834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:21.493971 systemd-logind[1962]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:28:21.496399 systemd-logind[1962]: Removed session 9. 
Dec 12 17:28:26.430313 kubelet[3325]: E1212 17:28:26.430122 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:28:26.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.55:22-139.178.68.195:33850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:26.505457 systemd[1]: Started sshd@9-172.31.16.55:22-139.178.68.195:33850.service - OpenSSH per-connection server daemon (139.178.68.195:33850). Dec 12 17:28:26.507336 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:26.507417 kernel: audit: type=1130 audit(1765560506.503:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.55:22-139.178.68.195:33850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:26.705000 audit[5662]: USER_ACCT pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.713867 sshd[5662]: Accepted publickey for core from 139.178.68.195 port 33850 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:26.712000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.720604 kernel: audit: type=1101 audit(1765560506.705:785): pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.720736 kernel: audit: type=1103 audit(1765560506.712:786): pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.716214 sshd-session[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:26.723275 kernel: audit: type=1006 audit(1765560506.713:787): pid=5662 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 17:28:26.713000 audit[5662]: SYSCALL 
arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea942090 a2=3 a3=0 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.732767 kernel: audit: type=1300 audit(1765560506.713:787): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea942090 a2=3 a3=0 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.733580 kernel: audit: type=1327 audit(1765560506.713:787): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:26.713000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:26.742974 systemd-logind[1962]: New session 10 of user core. Dec 12 17:28:26.748175 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:28:26.752000 audit[5662]: USER_START pid=5662 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.761959 kernel: audit: type=1105 audit(1765560506.752:788): pid=5662 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.760000 audit[5665]: CRED_ACQ pid=5665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.767986 kernel: audit: type=1103 audit(1765560506.760:789): pid=5665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.953125 sshd[5665]: Connection closed by 139.178.68.195 port 33850 Dec 12 17:28:26.982165 sshd-session[5662]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:26.985000 audit[5662]: USER_END pid=5662 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.994141 systemd[1]: sshd@9-172.31.16.55:22-139.178.68.195:33850.service: Deactivated successfully. 
Dec 12 17:28:26.985000 audit[5662]: CRED_DISP pid=5662 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.001106 kernel: audit: type=1106 audit(1765560506.985:790): pid=5662 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.001456 kernel: audit: type=1104 audit(1765560506.985:791): pid=5662 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:26.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.55:22-139.178.68.195:33850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:27.003066 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:28:27.006759 systemd-logind[1962]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:28:27.015497 systemd[1]: Started sshd@10-172.31.16.55:22-139.178.68.195:33858.service - OpenSSH per-connection server daemon (139.178.68.195:33858). Dec 12 17:28:27.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.55:22-139.178.68.195:33858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:27.019680 systemd-logind[1962]: Removed session 10. Dec 12 17:28:27.220000 audit[5678]: USER_ACCT pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.222545 sshd[5678]: Accepted publickey for core from 139.178.68.195 port 33858 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:27.222000 audit[5678]: CRED_ACQ pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.223000 audit[5678]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff72e54d0 a2=3 a3=0 items=0 ppid=1 pid=5678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:27.223000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:27.226200 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:27.235962 systemd-logind[1962]: New session 11 of user core. Dec 12 17:28:27.248167 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 17:28:27.252000 audit[5678]: USER_START pid=5678 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.255000 audit[5681]: CRED_ACQ pid=5681 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.536008 sshd[5681]: Connection closed by 139.178.68.195 port 33858 Dec 12 17:28:27.535807 sshd-session[5678]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:27.538000 audit[5678]: USER_END pid=5678 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.538000 audit[5678]: CRED_DISP pid=5678 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.551101 systemd[1]: sshd@10-172.31.16.55:22-139.178.68.195:33858.service: Deactivated successfully. Dec 12 17:28:27.551722 systemd-logind[1962]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:28:27.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.55:22-139.178.68.195:33858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:27.562483 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:28:27.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.16.55:22-139.178.68.195:33874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:27.595072 systemd[1]: Started sshd@11-172.31.16.55:22-139.178.68.195:33874.service - OpenSSH per-connection server daemon (139.178.68.195:33874). Dec 12 17:28:27.601127 systemd-logind[1962]: Removed session 11. 
Dec 12 17:28:27.789000 audit[5691]: USER_ACCT pid=5691 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.792394 sshd[5691]: Accepted publickey for core from 139.178.68.195 port 33874 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:27.792000 audit[5691]: CRED_ACQ pid=5691 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.792000 audit[5691]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd433df40 a2=3 a3=0 items=0 ppid=1 pid=5691 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:27.792000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:27.795595 sshd-session[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:27.805089 systemd-logind[1962]: New session 12 of user core. Dec 12 17:28:27.818158 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:28:27.822000 audit[5691]: USER_START pid=5691 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:27.825000 audit[5694]: CRED_ACQ pid=5694 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:28.014594 sshd[5694]: Connection closed by 139.178.68.195 port 33874 Dec 12 17:28:28.015758 sshd-session[5691]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:28.019000 audit[5691]: USER_END pid=5691 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:28.019000 audit[5691]: CRED_DISP pid=5691 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:28.024983 systemd[1]: sshd@11-172.31.16.55:22-139.178.68.195:33874.service: Deactivated successfully. Dec 12 17:28:28.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.16.55:22-139.178.68.195:33874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:28.030051 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:28:28.036927 systemd-logind[1962]: Session 12 logged out. Waiting for processes to exit. 
Dec 12 17:28:28.038805 systemd-logind[1962]: Removed session 12. Dec 12 17:28:28.425932 kubelet[3325]: E1212 17:28:28.425866 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:28:29.427085 kubelet[3325]: E1212 17:28:29.426922 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:28:31.424727 kubelet[3325]: E1212 17:28:31.424652 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:28:32.425520 kubelet[3325]: E1212 17:28:32.425210 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:28:32.427394 kubelet[3325]: E1212 17:28:32.426682 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:28:33.056169 systemd[1]: Started sshd@12-172.31.16.55:22-139.178.68.195:44186.service - OpenSSH per-connection server daemon (139.178.68.195:44186). 
Dec 12 17:28:33.064049 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 17:28:33.064170 kernel: audit: type=1130 audit(1765560513.055:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.55:22-139.178.68.195:44186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:33.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.55:22-139.178.68.195:44186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:33.247000 audit[5707]: USER_ACCT pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.255373 sshd[5707]: Accepted publickey for core from 139.178.68.195 port 44186 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:33.254000 audit[5707]: CRED_ACQ pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.261734 kernel: audit: type=1101 audit(1765560513.247:812): pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.261812 kernel: audit: type=1103 audit(1765560513.254:813): pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.256680 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:33.265740 kernel: audit: type=1006 audit(1765560513.254:814): pid=5707 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 17:28:33.254000 audit[5707]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4374800 a2=3 a3=0 items=0 ppid=1 pid=5707 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:33.266953 kernel: audit: type=1300 audit(1765560513.254:814): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4374800 a2=3 a3=0 items=0 ppid=1 pid=5707 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:33.254000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:33.274966 kernel: audit: type=1327 audit(1765560513.254:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:33.281139 systemd-logind[1962]: New session 13 of user core. Dec 12 17:28:33.287280 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 12 17:28:33.294000 audit[5707]: USER_START pid=5707 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.302000 audit[5710]: CRED_ACQ pid=5710 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.310597 kernel: audit: type=1105 audit(1765560513.294:815): pid=5707 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.310781 kernel: audit: type=1103 audit(1765560513.302:816): pid=5710 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.496587 sshd[5710]: Connection closed by 139.178.68.195 port 44186 Dec 12 17:28:33.497570 sshd-session[5707]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:33.499000 audit[5707]: USER_END pid=5707 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.509131 systemd[1]: sshd@12-172.31.16.55:22-139.178.68.195:44186.service: Deactivated successfully. Dec 12 17:28:33.499000 audit[5707]: CRED_DISP pid=5707 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.516264 kernel: audit: type=1106 audit(1765560513.499:817): pid=5707 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.516348 kernel: audit: type=1104 audit(1765560513.499:818): pid=5707 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:33.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.55:22-139.178.68.195:44186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:33.518810 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:28:33.527117 systemd-logind[1962]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:28:33.529112 systemd-logind[1962]: Removed session 13. 
Dec 12 17:28:35.426393 kubelet[3325]: E1212 17:28:35.426224 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:28:38.427110 containerd[2012]: time="2025-12-12T17:28:38.427039822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:28:38.539532 systemd[1]: Started sshd@13-172.31.16.55:22-139.178.68.195:44200.service - OpenSSH per-connection server daemon (139.178.68.195:44200). Dec 12 17:28:38.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.55:22-139.178.68.195:44200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:38.547825 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:38.547977 kernel: audit: type=1130 audit(1765560518.539:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.55:22-139.178.68.195:44200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:38.705230 containerd[2012]: time="2025-12-12T17:28:38.703937555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:38.707310 containerd[2012]: time="2025-12-12T17:28:38.707248739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:28:38.707700 containerd[2012]: time="2025-12-12T17:28:38.707299955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:38.708358 kubelet[3325]: E1212 17:28:38.708021 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:38.710341 kubelet[3325]: E1212 17:28:38.708404 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:38.710341 kubelet[3325]: E1212 17:28:38.708667 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-cbdd468db-lnm8s_calico-system(3ec49c85-5274-4ed7-b914-fc08a271b46e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:38.711760 containerd[2012]: time="2025-12-12T17:28:38.711703571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 
17:28:38.755000 audit[5728]: USER_ACCT pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.763652 sshd[5728]: Accepted publickey for core from 139.178.68.195 port 44200 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:38.764000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.772154 kernel: audit: type=1101 audit(1765560518.755:821): pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.772277 kernel: audit: type=1103 audit(1765560518.764:822): pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.766773 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:38.778951 kernel: audit: type=1006 audit(1765560518.764:823): pid=5728 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 17:28:38.789717 kernel: audit: type=1300 audit(1765560518.764:823): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc00cd510 a2=3 a3=0 items=0 ppid=1 pid=5728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:38.764000 audit[5728]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc00cd510 a2=3 a3=0 items=0 ppid=1 pid=5728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:38.788968 systemd-logind[1962]: New session 14 of user core. Dec 12 17:28:38.764000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:38.794910 kernel: audit: type=1327 audit(1765560518.764:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:38.796620 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 17:28:38.804000 audit[5728]: USER_START pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.814895 kernel: audit: type=1105 audit(1765560518.804:824): pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.815000 audit[5731]: CRED_ACQ pid=5731 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:38.826903 kernel: audit: type=1103 audit(1765560518.815:825): pid=5731 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:39.155757 sshd[5731]: Connection closed by 139.178.68.195 port 44200 Dec 12 17:28:39.156980 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:39.158000 audit[5728]: USER_END pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:39.170014 containerd[2012]: time="2025-12-12T17:28:39.169135389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:39.158000 audit[5728]: CRED_DISP pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:39.175946 kernel: audit: type=1106 audit(1765560519.158:826): pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:39.176097 kernel: audit: type=1104 audit(1765560519.158:827): pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:39.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.55:22-139.178.68.195:44200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:39.181417 containerd[2012]: time="2025-12-12T17:28:39.177228753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:28:39.181417 containerd[2012]: time="2025-12-12T17:28:39.177356757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:39.181657 kubelet[3325]: E1212 17:28:39.178104 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:39.181657 kubelet[3325]: E1212 17:28:39.178187 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:39.181657 kubelet[3325]: E1212 17:28:39.178380 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-cbdd468db-lnm8s_calico-system(3ec49c85-5274-4ed7-b914-fc08a271b46e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:39.181657 kubelet[3325]: E1212 17:28:39.178491 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:28:39.176384 systemd[1]: sshd@13-172.31.16.55:22-139.178.68.195:44200.service: Deactivated successfully. Dec 12 17:28:39.185326 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:28:39.193694 systemd-logind[1962]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:28:39.196928 systemd-logind[1962]: Removed session 14. 
Dec 12 17:28:43.429755 containerd[2012]: time="2025-12-12T17:28:43.429666842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:43.686698 containerd[2012]: time="2025-12-12T17:28:43.686533012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:43.689107 containerd[2012]: time="2025-12-12T17:28:43.688953592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:43.689107 containerd[2012]: time="2025-12-12T17:28:43.689032216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:43.689533 kubelet[3325]: E1212 17:28:43.689477 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:43.690419 kubelet[3325]: E1212 17:28:43.689545 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:43.690495 containerd[2012]: time="2025-12-12T17:28:43.690162544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:28:43.690563 kubelet[3325]: E1212 17:28:43.690521 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b6bc6ffc4-ln8dl_calico-apiserver(df70d9f4-d51b-472c-b1f4-6f65f02c50aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:43.690625 kubelet[3325]: E1212 17:28:43.690585 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:28:44.006557 containerd[2012]: time="2025-12-12T17:28:44.006400717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:44.008778 containerd[2012]: time="2025-12-12T17:28:44.008649433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:28:44.009100 containerd[2012]: time="2025-12-12T17:28:44.008714161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:44.009213 kubelet[3325]: E1212 17:28:44.008943 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:44.009213 kubelet[3325]: E1212 17:28:44.008997 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:44.009213 kubelet[3325]: E1212 17:28:44.009111 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c927k_calico-system(87213e25-1478-4b01-ac6c-54452f7f57dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:44.011332 kubelet[3325]: E1212 17:28:44.011275 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:28:44.200688 systemd[1]: Started sshd@14-172.31.16.55:22-139.178.68.195:48724.service - OpenSSH per-connection server daemon (139.178.68.195:48724). Dec 12 17:28:44.208616 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:44.208765 kernel: audit: type=1130 audit(1765560524.200:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.55:22-139.178.68.195:48724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:44.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.55:22-139.178.68.195:48724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:44.397000 audit[5753]: USER_ACCT pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.404952 sshd[5753]: Accepted publickey for core from 139.178.68.195 port 48724 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:44.403000 audit[5753]: CRED_ACQ pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.405903 kernel: audit: type=1101 audit(1765560524.397:830): pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.407014 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:44.415412 kernel: audit: type=1103 audit(1765560524.403:831): pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.415535 kernel: audit: type=1006 audit(1765560524.403:832): pid=5753 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 17:28:44.416057 kernel: audit: type=1300 audit(1765560524.403:832): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc0ad4580 a2=3 a3=0 items=0 ppid=1 pid=5753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:44.403000 audit[5753]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc0ad4580 a2=3 a3=0 items=0 ppid=1 pid=5753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:44.424670 kernel: audit: type=1327 audit(1765560524.403:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:44.403000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:44.429161 containerd[2012]: time="2025-12-12T17:28:44.429081867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:44.432732 systemd-logind[1962]: New session 15 of user core. Dec 12 17:28:44.443221 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 17:28:44.452000 audit[5753]: USER_START pid=5753 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.478376 kernel: audit: type=1105 audit(1765560524.452:833): pid=5753 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.478506 kernel: audit: type=1103 audit(1765560524.461:834): pid=5756 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.461000 audit[5756]: CRED_ACQ pid=5756 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.695912 containerd[2012]: time="2025-12-12T17:28:44.695718449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:44.698099 containerd[2012]: time="2025-12-12T17:28:44.698018381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:44.698470 containerd[2012]: time="2025-12-12T17:28:44.698070389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:44.698541 kubelet[3325]: E1212 17:28:44.698459 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:44.698541 kubelet[3325]: E1212 17:28:44.698520 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:44.699215 kubelet[3325]: E1212 17:28:44.698771 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5fcd4b4759-jhpmk_calico-apiserver(146b6197-b092-48f2-948f-08d710a51bd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:44.699215 kubelet[3325]: E1212 17:28:44.698829 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:28:44.700213 sshd[5756]: Connection closed by 139.178.68.195 port 48724 Dec 12 17:28:44.701724 sshd-session[5753]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:44.703198 containerd[2012]: time="2025-12-12T17:28:44.702151601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:28:44.706000 audit[5753]: USER_END pid=5753 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.706000 audit[5753]: CRED_DISP pid=5753 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.717791 systemd[1]: sshd@14-172.31.16.55:22-139.178.68.195:48724.service: Deactivated successfully. Dec 12 17:28:44.719746 kernel: audit: type=1106 audit(1765560524.706:835): pid=5753 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.720676 kernel: audit: type=1104 audit(1765560524.706:836): pid=5753 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:44.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.55:22-139.178.68.195:48724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:44.727411 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:28:44.732672 systemd-logind[1962]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:28:44.740575 systemd-logind[1962]: Removed session 15. 
Dec 12 17:28:44.992289 containerd[2012]: time="2025-12-12T17:28:44.992081946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:44.995017 containerd[2012]: time="2025-12-12T17:28:44.994932210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:28:44.995194 containerd[2012]: time="2025-12-12T17:28:44.994984254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:44.996209 kubelet[3325]: E1212 17:28:44.995424 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:44.996209 kubelet[3325]: E1212 17:28:44.995490 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:44.996209 kubelet[3325]: E1212 17:28:44.995595 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b7498b7d9-pzhlv_calico-system(baecc54d-00d4-4fc9-9061-9d5f893dca48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:44.996209 kubelet[3325]: E1212 17:28:44.995646 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:28:46.425604 containerd[2012]: time="2025-12-12T17:28:46.425477681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:46.696310 containerd[2012]: time="2025-12-12T17:28:46.696136087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:46.698591 containerd[2012]: time="2025-12-12T17:28:46.698475427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:46.698770 containerd[2012]: time="2025-12-12T17:28:46.698546191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:46.699005 kubelet[3325]: E1212 17:28:46.698966 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:46.700061 kubelet[3325]: E1212 17:28:46.699024 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:46.700061 kubelet[3325]: E1212 17:28:46.699129 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b6bc6ffc4-bklww_calico-apiserver(334ae9c1-5a14-4018-8c8f-986d294ed109): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:46.700061 kubelet[3325]: E1212 17:28:46.699201 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:28:47.429829 containerd[2012]: time="2025-12-12T17:28:47.429492546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:47.718976 containerd[2012]: time="2025-12-12T17:28:47.718810172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:47.721241 containerd[2012]: time="2025-12-12T17:28:47.721146248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:47.721522 containerd[2012]: time="2025-12-12T17:28:47.721147640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:47.723108 kubelet[3325]: E1212 17:28:47.722244 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:47.723108 kubelet[3325]: E1212 17:28:47.722312 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:47.723108 kubelet[3325]: E1212 17:28:47.722415 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:47.726272 containerd[2012]: time="2025-12-12T17:28:47.726203720Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:48.020507 containerd[2012]: time="2025-12-12T17:28:48.020346041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:48.023732 containerd[2012]: time="2025-12-12T17:28:48.023650877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:48.023926 containerd[2012]: time="2025-12-12T17:28:48.023776865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:48.024247 kubelet[3325]: E1212 17:28:48.024164 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:48.024344 kubelet[3325]: E1212 17:28:48.024252 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:48.024407 kubelet[3325]: E1212 17:28:48.024367 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:48.024488 kubelet[3325]: E1212 17:28:48.024429 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:28:49.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.55:22-139.178.68.195:48738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:49.741938 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:49.742004 kernel: audit: type=1130 audit(1765560529.739:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.55:22-139.178.68.195:48738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:49.740078 systemd[1]: Started sshd@15-172.31.16.55:22-139.178.68.195:48738.service - OpenSSH per-connection server daemon (139.178.68.195:48738). Dec 12 17:28:49.929000 audit[5768]: USER_ACCT pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.936310 sshd[5768]: Accepted publickey for core from 139.178.68.195 port 48738 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:49.936960 kernel: audit: type=1101 audit(1765560529.929:839): pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.937049 kernel: audit: type=1103 audit(1765560529.935:840): pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.935000 audit[5768]: CRED_ACQ pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.938359 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:49.943293 kernel: audit: type=1006 audit(1765560529.936:841): pid=5768 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 17:28:49.936000 audit[5768]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6508be0 a2=3 a3=0 items=0 ppid=1 pid=5768 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:49.953224 kernel: audit: type=1300 audit(1765560529.936:841): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6508be0 a2=3 a3=0 items=0 ppid=1 pid=5768 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:49.953798 kernel: audit: type=1327 audit(1765560529.936:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:49.936000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:49.959075 systemd-logind[1962]: New session 16 of user core. Dec 12 17:28:49.968223 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 17:28:49.974000 audit[5768]: USER_START pid=5768 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.983063 kernel: audit: type=1105 audit(1765560529.974:842): pid=5768 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.982000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:49.989130 kernel: audit: type=1103 audit(1765560529.982:843): pid=5771 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.180425 sshd[5771]: Connection closed by 139.178.68.195 port 48738 Dec 12 17:28:50.181478 sshd-session[5768]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:50.183000 audit[5768]: USER_END pid=5768 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.192021 systemd[1]: sshd@15-172.31.16.55:22-139.178.68.195:48738.service: Deactivated successfully. Dec 12 17:28:50.184000 audit[5768]: CRED_DISP pid=5768 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.195902 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:28:50.198016 kernel: audit: type=1106 audit(1765560530.183:844): pid=5768 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.198112 kernel: audit: type=1104 audit(1765560530.184:845): pid=5768 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.200612 systemd-logind[1962]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:28:50.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.55:22-139.178.68.195:48738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:50.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.55:22-139.178.68.195:51696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:50.220651 systemd-logind[1962]: Removed session 16. Dec 12 17:28:50.221923 systemd[1]: Started sshd@16-172.31.16.55:22-139.178.68.195:51696.service - OpenSSH per-connection server daemon (139.178.68.195:51696). Dec 12 17:28:50.422000 audit[5782]: USER_ACCT pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.424883 sshd[5782]: Accepted publickey for core from 139.178.68.195 port 51696 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:50.424000 audit[5782]: CRED_ACQ pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.425000 audit[5782]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef90c0a0 a2=3 a3=0 items=0 ppid=1 pid=5782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:50.425000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:50.426778 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:50.436972 systemd-logind[1962]: New session 17 of user core. Dec 12 17:28:50.443204 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 17:28:50.449000 audit[5782]: USER_START pid=5782 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.453000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.921335 sshd[5785]: Connection closed by 139.178.68.195 port 51696 Dec 12 17:28:50.922405 sshd-session[5782]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:50.925000 audit[5782]: USER_END pid=5782 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.926000 audit[5782]: CRED_DISP pid=5782 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:50.931788 systemd[1]: sshd@16-172.31.16.55:22-139.178.68.195:51696.service: Deactivated successfully. Dec 12 17:28:50.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.55:22-139.178.68.195:51696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:50.936415 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:28:50.940071 systemd-logind[1962]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:28:50.959449 systemd-logind[1962]: Removed session 17. Dec 12 17:28:50.961443 systemd[1]: Started sshd@17-172.31.16.55:22-139.178.68.195:51710.service - OpenSSH per-connection server daemon (139.178.68.195:51710). Dec 12 17:28:50.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.55:22-139.178.68.195:51710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:51.164000 audit[5795]: USER_ACCT pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:51.165422 sshd[5795]: Accepted publickey for core from 139.178.68.195 port 51710 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:51.165000 audit[5795]: CRED_ACQ pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:51.166000 audit[5795]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd8bf130 a2=3 a3=0 items=0 ppid=1 pid=5795 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:51.166000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:51.167818 sshd-session[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:51.179049 systemd-logind[1962]: New session 18 of user core. Dec 12 17:28:51.185244 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 17:28:51.192000 audit[5795]: USER_START pid=5795 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:51.195000 audit[5798]: CRED_ACQ pid=5798 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:51.430621 kubelet[3325]: E1212 17:28:51.430110 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:28:52.354048 sshd[5798]: Connection closed by 139.178.68.195 port 51710 Dec 12 17:28:52.355082 sshd-session[5795]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:52.360000 audit[5795]: USER_END pid=5795 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:52.360000 audit[5795]: CRED_DISP pid=5795 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:52.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.55:22-139.178.68.195:51710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:52.367940 systemd[1]: sshd@17-172.31.16.55:22-139.178.68.195:51710.service: Deactivated successfully. Dec 12 17:28:52.377021 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:28:52.385309 systemd-logind[1962]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:28:52.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.16.55:22-139.178.68.195:51716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:52.406380 systemd[1]: Started sshd@18-172.31.16.55:22-139.178.68.195:51716.service - OpenSSH per-connection server daemon (139.178.68.195:51716). Dec 12 17:28:52.412816 systemd-logind[1962]: Removed session 18. Dec 12 17:28:52.432000 audit[5808]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:52.432000 audit[5808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff3b27800 a2=0 a3=1 items=0 ppid=3625 pid=5808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:52.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:52.438000 audit[5808]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:52.438000 audit[5808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff3b27800 a2=0 a3=1 items=0 ppid=3625 pid=5808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:52.438000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:52.615000 audit[5813]: USER_ACCT pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:52.617197 sshd[5813]: Accepted publickey for core from 139.178.68.195 port 51716 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:52.618000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:52.618000 audit[5813]: SYSCALL 
arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce385080 a2=3 a3=0 items=0 ppid=1 pid=5813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:52.618000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:52.620709 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:52.630958 systemd-logind[1962]: New session 19 of user core. Dec 12 17:28:52.636184 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:28:52.643000 audit[5813]: USER_START pid=5813 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:52.646000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:53.163458 sshd[5816]: Connection closed by 139.178.68.195 port 51716 Dec 12 17:28:53.164007 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:53.170000 audit[5813]: USER_END pid=5813 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:53.170000 audit[5813]: CRED_DISP pid=5813 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:53.179475 systemd-logind[1962]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:28:53.181331 systemd[1]: sshd@18-172.31.16.55:22-139.178.68.195:51716.service: Deactivated successfully. Dec 12 17:28:53.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.16.55:22-139.178.68.195:51716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:53.191649 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:28:53.212649 systemd[1]: Started sshd@19-172.31.16.55:22-139.178.68.195:51730.service - OpenSSH per-connection server daemon (139.178.68.195:51730). Dec 12 17:28:53.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.16.55:22-139.178.68.195:51730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:53.215499 systemd-logind[1962]: Removed session 19. 
Dec 12 17:28:53.403000 audit[5826]: USER_ACCT pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:53.405151 sshd[5826]: Accepted publickey for core from 139.178.68.195 port 51730 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:53.405000 audit[5826]: CRED_ACQ pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:53.405000 audit[5826]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd909a8b0 a2=3 a3=0 items=0 ppid=1 pid=5826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:53.405000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:53.407763 sshd-session[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:53.417978 systemd-logind[1962]: New session 20 of user core. Dec 12 17:28:53.426195 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:28:53.433000 audit[5826]: USER_START pid=5826 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:53.437000 audit[5829]: CRED_ACQ pid=5829 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:53.475000 audit[5834]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=5834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:53.475000 audit[5834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffccc79090 a2=0 a3=1 items=0 ppid=3625 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:53.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:53.481000 audit[5834]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:53.481000 audit[5834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffccc79090 a2=0 a3=1 items=0 ppid=3625 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:53.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:53.631688 sshd[5829]: Connection closed by 139.178.68.195 port 51730 Dec 12 17:28:53.632585 
sshd-session[5826]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:53.635000 audit[5826]: USER_END pid=5826 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:53.635000 audit[5826]: CRED_DISP pid=5826 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:53.641633 systemd[1]: sshd@19-172.31.16.55:22-139.178.68.195:51730.service: Deactivated successfully. Dec 12 17:28:53.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.16.55:22-139.178.68.195:51730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:53.645816 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:28:53.648619 systemd-logind[1962]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:28:53.653072 systemd-logind[1962]: Removed session 20. Dec 12 17:28:56.425667 kubelet[3325]: E1212 17:28:56.425594 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:28:57.428108 kubelet[3325]: E1212 17:28:57.428018 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:28:57.430646 kubelet[3325]: E1212 17:28:57.430419 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:28:57.433878 kubelet[3325]: E1212 17:28:57.433768 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:28:58.678154 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 17:28:58.678316 kernel: audit: type=1130 audit(1765560538.670:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.55:22-139.178.68.195:51740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:58.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.55:22-139.178.68.195:51740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:58.670958 systemd[1]: Started sshd@20-172.31.16.55:22-139.178.68.195:51740.service - OpenSSH per-connection server daemon (139.178.68.195:51740). Dec 12 17:28:58.859000 audit[5867]: USER_ACCT pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:58.866459 sshd[5867]: Accepted publickey for core from 139.178.68.195 port 51740 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:28:58.868424 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:58.866000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:58.874807 kernel: audit: type=1101 audit(1765560538.859:888): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:58.874951 kernel: audit: type=1103 audit(1765560538.866:889): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:58.876724 kernel: audit: type=1006 audit(1765560538.866:890): pid=5867 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 12 17:28:58.866000 audit[5867]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe13ee380 a2=3 a3=0 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:58.886268 kernel: audit: type=1300 audit(1765560538.866:890): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe13ee380 a2=3 a3=0 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:58.866000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:58.889017 kernel: audit: type=1327 
audit(1765560538.866:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:58.895801 systemd-logind[1962]: New session 21 of user core. Dec 12 17:28:58.904189 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 17:28:58.911000 audit[5867]: USER_START pid=5867 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:58.919966 kernel: audit: type=1105 audit(1765560538.911:891): pid=5867 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:58.920000 audit[5870]: CRED_ACQ pid=5870 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:58.926933 kernel: audit: type=1103 audit(1765560538.920:892): pid=5870 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:59.124729 sshd[5870]: Connection closed by 139.178.68.195 port 51740 Dec 12 17:28:59.125740 sshd-session[5867]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:59.129000 audit[5867]: USER_END pid=5867 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:59.139511 systemd[1]: sshd@20-172.31.16.55:22-139.178.68.195:51740.service: Deactivated successfully. Dec 12 17:28:59.129000 audit[5867]: CRED_DISP pid=5867 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:59.150602 kernel: audit: type=1106 audit(1765560539.129:893): pid=5867 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:59.150777 kernel: audit: type=1104 audit(1765560539.129:894): pid=5867 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:28:59.144270 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:28:59.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.55:22-139.178.68.195:51740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 12 17:28:59.152581 systemd-logind[1962]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:28:59.155690 systemd-logind[1962]: Removed session 21. Dec 12 17:28:59.426398 kubelet[3325]: E1212 17:28:59.426214 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:28:59.790000 audit[5882]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:59.790000 audit[5882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe6c36d10 a2=0 a3=1 items=0 ppid=3625 pid=5882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:59.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:59.799000 audit[5882]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=5882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:59.799000 audit[5882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe6c36d10 a2=0 a3=1 items=0 ppid=3625 pid=5882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:59.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:00.432397 kubelet[3325]: E1212 17:29:00.429311 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:29:04.165451 systemd[1]: Started sshd@21-172.31.16.55:22-139.178.68.195:53216.service - OpenSSH per-connection server daemon (139.178.68.195:53216). Dec 12 17:29:04.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.55:22-139.178.68.195:53216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:04.169320 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:29:04.169561 kernel: audit: type=1130 audit(1765560544.164:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.55:22-139.178.68.195:53216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:04.376000 audit[5886]: USER_ACCT pid=5886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.380151 sshd[5886]: Accepted publickey for core from 139.178.68.195 port 53216 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:04.384000 audit[5886]: CRED_ACQ pid=5886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.386829 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:04.391294 kernel: audit: type=1101 audit(1765560544.376:899): pid=5886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.391402 kernel: audit: type=1103 audit(1765560544.384:900): pid=5886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.395515 kernel: audit: type=1006 audit(1765560544.384:901): pid=5886 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 17:29:04.384000 audit[5886]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa2bf490 a2=3 a3=0 items=0 ppid=1 pid=5886 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:04.402970 kernel: audit: type=1300 audit(1765560544.384:901): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa2bf490 a2=3 a3=0 items=0 ppid=1 pid=5886 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:04.384000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:04.405934 kernel: audit: type=1327 audit(1765560544.384:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:04.415957 systemd-logind[1962]: New session 22 of user core. Dec 12 17:29:04.420264 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 17:29:04.431000 audit[5886]: USER_START pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.433564 kubelet[3325]: E1212 17:29:04.432970 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:29:04.445904 kernel: audit: type=1105 audit(1765560544.431:902): pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.445000 audit[5889]: CRED_ACQ pid=5889 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.454940 kernel: audit: type=1103 audit(1765560544.445:903): pid=5889 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.646637 sshd[5889]: Connection closed by 139.178.68.195 port 53216 Dec 12 17:29:04.647189 sshd-session[5886]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:04.651000 audit[5886]: USER_END pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.661004 systemd[1]: sshd@21-172.31.16.55:22-139.178.68.195:53216.service: Deactivated successfully. 
Dec 12 17:29:04.668985 kernel: audit: type=1106 audit(1765560544.651:904): pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.669119 kernel: audit: type=1104 audit(1765560544.651:905): pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.651000 audit[5886]: CRED_DISP pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:04.670644 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:29:04.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.55:22-139.178.68.195:53216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:04.673944 systemd-logind[1962]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:29:04.682839 systemd-logind[1962]: Removed session 22. Dec 12 17:29:08.426711 kubelet[3325]: E1212 17:29:08.426638 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:29:09.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.55:22-139.178.68.195:53230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:09.687151 systemd[1]: Started sshd@22-172.31.16.55:22-139.178.68.195:53230.service - OpenSSH per-connection server daemon (139.178.68.195:53230). Dec 12 17:29:09.689087 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:09.689146 kernel: audit: type=1130 audit(1765560549.686:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.55:22-139.178.68.195:53230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:09.899000 audit[5903]: USER_ACCT pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:09.907092 sshd[5903]: Accepted publickey for core from 139.178.68.195 port 53230 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:09.909696 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:09.907000 audit[5903]: CRED_ACQ pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:09.918444 kernel: audit: type=1101 audit(1765560549.899:908): pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:09.918598 kernel: audit: type=1103 audit(1765560549.907:909): pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:09.921004 kernel: audit: type=1006 audit(1765560549.908:910): pid=5903 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 17:29:09.908000 audit[5903]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd63d5440 a2=3 a3=0 items=0 ppid=1 pid=5903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:09.930985 kernel: audit: type=1300 audit(1765560549.908:910): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd63d5440 a2=3 a3=0 items=0 ppid=1 pid=5903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:09.908000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:09.936351 kernel: audit: type=1327 audit(1765560549.908:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:09.943324 systemd-logind[1962]: New session 23 of user core. Dec 12 17:29:09.947201 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 17:29:09.957000 audit[5903]: USER_START pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:09.968934 kernel: audit: type=1105 audit(1765560549.957:911): pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:09.970000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:09.977908 kernel: audit: type=1103 audit(1765560549.970:912): pid=5906 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.244447 sshd[5906]: Connection closed by 139.178.68.195 port 53230 Dec 12 17:29:10.245568 sshd-session[5903]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:10.249000 audit[5903]: USER_END pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.263742 systemd[1]: sshd@22-172.31.16.55:22-139.178.68.195:53230.service: Deactivated successfully. Dec 12 17:29:10.249000 audit[5903]: CRED_DISP pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.277901 kernel: audit: type=1106 audit(1765560550.249:913): pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.278017 kernel: audit: type=1104 audit(1765560550.249:914): pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:10.279540 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:29:10.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.55:22-139.178.68.195:53230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:10.286113 systemd-logind[1962]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:29:10.291001 systemd-logind[1962]: Removed session 23. 
Dec 12 17:29:10.425631 kubelet[3325]: E1212 17:29:10.425557 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:29:10.428206 kubelet[3325]: E1212 17:29:10.426026 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:29:11.429451 kubelet[3325]: E1212 17:29:11.429247 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:29:11.432297 kubelet[3325]: E1212 17:29:11.431311 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:29:12.427492 kubelet[3325]: E1212 17:29:12.427318 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:29:15.289349 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:15.289599 kernel: audit: type=1130 audit(1765560555.281:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@23-172.31.16.55:22-139.178.68.195:43706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:15.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.55:22-139.178.68.195:43706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:15.282463 systemd[1]: Started sshd@23-172.31.16.55:22-139.178.68.195:43706.service - OpenSSH per-connection server daemon (139.178.68.195:43706). Dec 12 17:29:15.428787 kubelet[3325]: E1212 17:29:15.428637 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:29:15.494122 sshd[5922]: Accepted publickey for core from 139.178.68.195 port 43706 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:15.493000 audit[5922]: USER_ACCT pid=5922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.501000 audit[5922]: CRED_ACQ pid=5922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.508483 kernel: audit: type=1101 audit(1765560555.493:917): pid=5922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.508730 kernel: audit: type=1103 audit(1765560555.501:918): pid=5922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.503924 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:15.514148 kernel: audit: type=1006 audit(1765560555.501:919): pid=5922 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 17:29:15.522048 kernel: audit: type=1300 audit(1765560555.501:919): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecab8680 a2=3 a3=0 items=0 ppid=1 pid=5922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:15.501000 audit[5922]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecab8680 a2=3 a3=0 items=0 ppid=1 pid=5922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:15.501000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:15.526052 kernel: audit: type=1327 audit(1765560555.501:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:15.536064 systemd-logind[1962]: New session 24 of user core. Dec 12 17:29:15.542243 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 17:29:15.551000 audit[5922]: USER_START pid=5922 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.560906 kernel: audit: type=1105 audit(1765560555.551:920): pid=5922 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.560000 audit[5925]: CRED_ACQ pid=5925 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.569920 kernel: audit: type=1103 audit(1765560555.560:921): pid=5925 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.786184 sshd[5925]: Connection closed by 139.178.68.195 port 43706 Dec 12 17:29:15.786580 sshd-session[5922]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:15.792000 audit[5922]: USER_END pid=5922 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.806234 systemd[1]: sshd@23-172.31.16.55:22-139.178.68.195:43706.service: Deactivated successfully. 
Dec 12 17:29:15.814374 kernel: audit: type=1106 audit(1765560555.792:922): pid=5922 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.814781 kernel: audit: type=1104 audit(1765560555.792:923): pid=5922 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.792000 audit[5922]: CRED_DISP pid=5922 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:15.822078 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 17:29:15.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.55:22-139.178.68.195:43706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:15.829448 systemd-logind[1962]: Session 24 logged out. Waiting for processes to exit. Dec 12 17:29:15.835558 systemd-logind[1962]: Removed session 24. Dec 12 17:29:19.433171 kubelet[3325]: E1212 17:29:19.433105 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:29:20.830282 systemd[1]: Started sshd@24-172.31.16.55:22-139.178.68.195:51306.service - OpenSSH per-connection server daemon (139.178.68.195:51306). Dec 12 17:29:20.839230 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:20.839299 kernel: audit: type=1130 audit(1765560560.829:925): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.55:22-139.178.68.195:51306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:20.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.55:22-139.178.68.195:51306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:21.057000 audit[5943]: USER_ACCT pid=5943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.065790 sshd[5943]: Accepted publickey for core from 139.178.68.195 port 51306 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:21.065000 audit[5943]: CRED_ACQ pid=5943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.068071 kernel: audit: type=1101 audit(1765560561.057:926): pid=5943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.069391 sshd-session[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:21.080685 kernel: audit: type=1103 audit(1765560561.065:927): pid=5943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.082532 kernel: audit: type=1006 audit(1765560561.065:928): pid=5943 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 12 17:29:21.082625 kernel: audit: type=1300 audit(1765560561.065:928): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd06dba40 a2=3 a3=0 items=0 ppid=1 pid=5943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:21.065000 audit[5943]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd06dba40 a2=3 a3=0 items=0 ppid=1 pid=5943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:21.065000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:21.091332 kernel: audit: type=1327 audit(1765560561.065:928): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:21.103340 systemd-logind[1962]: New session 25 of user core. Dec 12 17:29:21.108250 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 12 17:29:21.115000 audit[5943]: USER_START pid=5943 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.123970 kernel: audit: type=1105 audit(1765560561.115:929): pid=5943 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.127000 audit[5946]: CRED_ACQ pid=5946 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.134890 kernel: audit: type=1103 audit(1765560561.127:930): pid=5946 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.375556 sshd[5946]: Connection closed by 139.178.68.195 port 51306 Dec 12 17:29:21.376452 sshd-session[5943]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:21.379000 audit[5943]: USER_END pid=5943 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.389406 systemd[1]: sshd@24-172.31.16.55:22-139.178.68.195:51306.service: Deactivated successfully. Dec 12 17:29:21.379000 audit[5943]: CRED_DISP pid=5943 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.399631 kernel: audit: type=1106 audit(1765560561.379:931): pid=5943 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.399794 kernel: audit: type=1104 audit(1765560561.379:932): pid=5943 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:21.401019 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 17:29:21.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.55:22-139.178.68.195:51306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:21.406251 systemd-logind[1962]: Session 25 logged out. Waiting for processes to exit. Dec 12 17:29:21.412101 systemd-logind[1962]: Removed session 25. 
Dec 12 17:29:21.427959 kubelet[3325]: E1212 17:29:21.427714 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:29:22.425173 kubelet[3325]: E1212 17:29:22.425099 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:29:23.427651 kubelet[3325]: E1212 17:29:23.427572 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:29:24.425823 kubelet[3325]: E1212 17:29:24.425729 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:29:25.430056 containerd[2012]: time="2025-12-12T17:29:25.429145279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:29:25.761712 containerd[2012]: time="2025-12-12T17:29:25.761135841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:25.764030 containerd[2012]: time="2025-12-12T17:29:25.763925241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:29:25.764489 containerd[2012]: time="2025-12-12T17:29:25.764056437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=0" Dec 12 17:29:25.765328 kubelet[3325]: E1212 17:29:25.765170 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:29:25.767429 kubelet[3325]: E1212 17:29:25.766190 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:29:25.767429 kubelet[3325]: E1212 17:29:25.766783 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b7498b7d9-pzhlv_calico-system(baecc54d-00d4-4fc9-9061-9d5f893dca48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:25.767429 kubelet[3325]: E1212 17:29:25.767130 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:29:26.418659 systemd[1]: Started sshd@25-172.31.16.55:22-139.178.68.195:51312.service - OpenSSH per-connection server daemon (139.178.68.195:51312). Dec 12 17:29:26.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.55:22-139.178.68.195:51312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:26.423278 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:26.423511 kernel: audit: type=1130 audit(1765560566.418:934): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.55:22-139.178.68.195:51312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:26.660000 audit[5985]: USER_ACCT pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.668991 sshd[5985]: Accepted publickey for core from 139.178.68.195 port 51312 ssh2: RSA SHA256:UpPM+0tNfNI5Eum+RXqais+c5qf/UrTYct83Ztza4aE Dec 12 17:29:26.670433 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:26.671912 kernel: audit: type=1101 audit(1765560566.660:935): pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.668000 audit[5985]: CRED_ACQ pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.683033 kernel: audit: type=1103 audit(1765560566.668:936): pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.683206 kernel: audit: type=1006 audit(1765560566.668:937): pid=5985 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 12 17:29:26.668000 audit[5985]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcbd12090 a2=3 a3=0 items=0 ppid=1 pid=5985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:26.690150 kernel: audit: type=1300 audit(1765560566.668:937): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcbd12090 a2=3 a3=0 items=0 ppid=1 pid=5985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:26.692996 kernel: audit: type=1327 audit(1765560566.668:937): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:26.668000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:26.704884 systemd-logind[1962]: New session 26 of user core. Dec 12 17:29:26.712267 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 12 17:29:26.720000 audit[5985]: USER_START pid=5985 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.730934 kernel: audit: type=1105 audit(1765560566.720:938): pid=5985 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.731000 audit[5989]: CRED_ACQ pid=5989 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:26.740935 kernel: audit: type=1103 audit(1765560566.731:939): pid=5989 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.004968 sshd[5989]: Connection closed by 139.178.68.195 port 51312 Dec 12 17:29:27.006330 sshd-session[5985]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:27.014000 audit[5985]: USER_END pid=5985 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.027213 systemd[1]: sshd@25-172.31.16.55:22-139.178.68.195:51312.service: Deactivated successfully. Dec 12 17:29:27.040659 kernel: audit: type=1106 audit(1765560567.014:940): pid=5985 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.040827 kernel: audit: type=1104 audit(1765560567.014:941): pid=5985 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.014000 audit[5985]: CRED_DISP pid=5985 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 12 17:29:27.042627 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 17:29:27.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.55:22-139.178.68.195:51312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:27.051574 systemd-logind[1962]: Session 26 logged out. Waiting for processes to exit. Dec 12 17:29:27.061468 systemd-logind[1962]: Removed session 26. 
Dec 12 17:29:27.426693 containerd[2012]: time="2025-12-12T17:29:27.426440697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:29:27.716625 containerd[2012]: time="2025-12-12T17:29:27.715840966Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:27.720691 containerd[2012]: time="2025-12-12T17:29:27.719040118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:29:27.720691 containerd[2012]: time="2025-12-12T17:29:27.719181430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:27.720971 kubelet[3325]: E1212 17:29:27.719384 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:29:27.720971 kubelet[3325]: E1212 17:29:27.719441 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:29:27.720971 kubelet[3325]: E1212 17:29:27.719545 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-cbdd468db-lnm8s_calico-system(3ec49c85-5274-4ed7-b914-fc08a271b46e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:27.725627 containerd[2012]: time="2025-12-12T17:29:27.725564518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:29:27.995200 containerd[2012]: time="2025-12-12T17:29:27.994414164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:27.996829 containerd[2012]: time="2025-12-12T17:29:27.996725556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:29:27.996829 containerd[2012]: time="2025-12-12T17:29:27.996779184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:27.997199 kubelet[3325]: E1212 17:29:27.997137 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:29:27.997275 kubelet[3325]: E1212 17:29:27.997221 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:29:27.997810 kubelet[3325]: E1212 17:29:27.997381 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-cbdd468db-lnm8s_calico-system(3ec49c85-5274-4ed7-b914-fc08a271b46e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:27.997810 kubelet[3325]: E1212 17:29:27.997478 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:29:30.427509 containerd[2012]: time="2025-12-12T17:29:30.427432224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:29:30.685516 containerd[2012]: time="2025-12-12T17:29:30.685346545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:30.687577 containerd[2012]: time="2025-12-12T17:29:30.687508585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:29:30.687722 containerd[2012]: time="2025-12-12T17:29:30.687624937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:30.688088 kubelet[3325]: E1212 17:29:30.688008 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:30.688598 kubelet[3325]: E1212 17:29:30.688084 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:30.688598 kubelet[3325]: E1212 17:29:30.688202 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5fcd4b4759-jhpmk_calico-apiserver(146b6197-b092-48f2-948f-08d710a51bd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:30.688598 kubelet[3325]: E1212 17:29:30.688254 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:29:33.425141 containerd[2012]: time="2025-12-12T17:29:33.425025423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:29:33.702834 containerd[2012]: time="2025-12-12T17:29:33.702642988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:33.705073 containerd[2012]: time="2025-12-12T17:29:33.704991664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:29:33.705073 containerd[2012]: time="2025-12-12T17:29:33.705026692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:33.705533 kubelet[3325]: E1212 17:29:33.705319 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:33.705533 kubelet[3325]: E1212 17:29:33.705381 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:33.705533 kubelet[3325]: E1212 17:29:33.705517 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b6bc6ffc4-bklww_calico-apiserver(334ae9c1-5a14-4018-8c8f-986d294ed109): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:33.706250 kubelet[3325]: E1212 17:29:33.705570 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:29:35.426360 containerd[2012]: time="2025-12-12T17:29:35.426297461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:29:35.706911 containerd[2012]: time="2025-12-12T17:29:35.706725942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:35.709070 containerd[2012]: time="2025-12-12T17:29:35.708993378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 
17:29:35.709248 containerd[2012]: time="2025-12-12T17:29:35.709128990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:35.709773 kubelet[3325]: E1212 17:29:35.709476 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:35.709773 kubelet[3325]: E1212 17:29:35.709540 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:35.709773 kubelet[3325]: E1212 17:29:35.709676 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b6bc6ffc4-ln8dl_calico-apiserver(df70d9f4-d51b-472c-b1f4-6f65f02c50aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:35.709773 kubelet[3325]: E1212 17:29:35.709726 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:29:37.429336 containerd[2012]: time="2025-12-12T17:29:37.429264295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:29:37.703580 containerd[2012]: time="2025-12-12T17:29:37.703366364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:37.705732 containerd[2012]: time="2025-12-12T17:29:37.705586064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:29:37.706035 containerd[2012]: time="2025-12-12T17:29:37.705667760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:37.706125 kubelet[3325]: E1212 17:29:37.706003 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:29:37.706125 kubelet[3325]: E1212 17:29:37.706094 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:29:37.707407 kubelet[3325]: E1212 17:29:37.706520 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:37.708140 containerd[2012]: time="2025-12-12T17:29:37.707814440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:29:37.990280 containerd[2012]: time="2025-12-12T17:29:37.990068805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:37.992411 containerd[2012]: time="2025-12-12T17:29:37.992346825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:29:37.992531 containerd[2012]: time="2025-12-12T17:29:37.992474025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:37.992795 kubelet[3325]: E1212 17:29:37.992740 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:29:37.992913 kubelet[3325]: E1212 17:29:37.992807 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:29:37.993119 kubelet[3325]: E1212 17:29:37.993076 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c927k_calico-system(87213e25-1478-4b01-ac6c-54452f7f57dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:37.993209 kubelet[3325]: E1212 17:29:37.993142 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:29:37.993604 containerd[2012]: time="2025-12-12T17:29:37.993556497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:29:38.309784 containerd[2012]: time="2025-12-12T17:29:38.309698443Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:38.312012 containerd[2012]: time="2025-12-12T17:29:38.311952967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:29:38.312141 containerd[2012]: 
time="2025-12-12T17:29:38.312076459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:38.312389 kubelet[3325]: E1212 17:29:38.312331 3325 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:29:38.312507 kubelet[3325]: E1212 17:29:38.312400 3325 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:29:38.312576 kubelet[3325]: E1212 17:29:38.312507 3325 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hb6pw_calico-system(8797b6d6-7a5e-4865-91c2-2bd3d90f57cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:38.312658 kubelet[3325]: E1212 17:29:38.312576 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:29:40.425779 kubelet[3325]: E1212 17:29:40.425649 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:29:41.180353 systemd[1]: cri-containerd-d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3.scope: Deactivated successfully. Dec 12 17:29:41.180991 systemd[1]: cri-containerd-d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3.scope: Consumed 24.679s CPU time, 110.2M memory peak. 
Dec 12 17:29:41.186156 containerd[2012]: time="2025-12-12T17:29:41.186089409Z" level=info msg="received container exit event container_id:\"d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3\" id:\"d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3\" pid:3834 exit_status:1 exited_at:{seconds:1765560581 nanos:185471997}" Dec 12 17:29:41.186000 audit: BPF prog-id=153 op=UNLOAD Dec 12 17:29:41.188840 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:41.188950 kernel: audit: type=1334 audit(1765560581.186:943): prog-id=153 op=UNLOAD Dec 12 17:29:41.186000 audit: BPF prog-id=157 op=UNLOAD Dec 12 17:29:41.194925 kernel: audit: type=1334 audit(1765560581.186:944): prog-id=157 op=UNLOAD Dec 12 17:29:41.239524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3-rootfs.mount: Deactivated successfully. Dec 12 17:29:41.337839 systemd[1]: cri-containerd-adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391.scope: Deactivated successfully. Dec 12 17:29:41.338608 systemd[1]: cri-containerd-adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391.scope: Consumed 6.292s CPU time, 61.1M memory peak. Dec 12 17:29:41.340000 audit: BPF prog-id=268 op=LOAD Dec 12 17:29:41.342881 kernel: audit: type=1334 audit(1765560581.340:945): prog-id=268 op=LOAD Dec 12 17:29:41.340000 audit: BPF prog-id=95 op=UNLOAD Dec 12 17:29:41.345505 containerd[2012]: time="2025-12-12T17:29:41.345364774Z" level=info msg="received container exit event container_id:\"adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391\" id:\"adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391\" pid:3153 exit_status:1 exited_at:{seconds:1765560581 nanos:343652386}" Dec 12 17:29:41.347498 kernel: audit: type=1334 audit(1765560581.340:946): prog-id=95 op=UNLOAD Dec 12 17:29:41.347585 kernel: audit: type=1334 audit(1765560581.341:947): prog-id=110 op=UNLOAD Dec 12 17:29:41.341000 audit: BPF prog-id=110 op=UNLOAD Dec 12 17:29:41.350689 kernel: audit: type=1334 audit(1765560581.341:948): prog-id=114 op=UNLOAD Dec 12 17:29:41.341000 audit: BPF prog-id=114 op=UNLOAD Dec 12 17:29:41.395406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391-rootfs.mount: Deactivated successfully. 
Dec 12 17:29:41.428343 kubelet[3325]: E1212 17:29:41.428254 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:29:41.497446 kubelet[3325]: I1212 17:29:41.497324 3325 scope.go:117] "RemoveContainer" containerID="adda53a9b3f8f419a411be9eb221985f6e6edc2d3e4f496b97a44eb9fc7b6391" Dec 12 17:29:41.501246 kubelet[3325]: I1212 17:29:41.501180 3325 scope.go:117] "RemoveContainer" containerID="d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3" Dec 12 17:29:41.505517 containerd[2012]: time="2025-12-12T17:29:41.505459043Z" level=info msg="CreateContainer within sandbox \"ac2cab7dcd35a71f435e177dcd2b02d1edb8bd28619c0243bf09bd5844b25d2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 12 17:29:41.505953 containerd[2012]: time="2025-12-12T17:29:41.505459139Z" level=info msg="CreateContainer within sandbox \"b5525fbba541687bc4c990607e68864a00a0dc3054f1b1c97dc82f9721e585d3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 12 17:29:41.531215 containerd[2012]: time="2025-12-12T17:29:41.530238647Z" level=info msg="Container ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:41.547173 containerd[2012]: time="2025-12-12T17:29:41.547097399Z" level=info msg="Container e74c01f6be407bcf738eb1a8c089576683a243ccf66a5212adb4a26eee6a75ee: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:41.548625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985332234.mount: Deactivated successfully. Dec 12 17:29:41.559627 containerd[2012]: time="2025-12-12T17:29:41.559547939Z" level=info msg="CreateContainer within sandbox \"b5525fbba541687bc4c990607e68864a00a0dc3054f1b1c97dc82f9721e585d3\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd\"" Dec 12 17:29:41.559999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175485684.mount: Deactivated successfully. 
Dec 12 17:29:41.562272 containerd[2012]: time="2025-12-12T17:29:41.562206335Z" level=info msg="StartContainer for \"ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd\"" Dec 12 17:29:41.564013 containerd[2012]: time="2025-12-12T17:29:41.563911631Z" level=info msg="connecting to shim ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd" address="unix:///run/containerd/s/cc8da83a028868cc65001f3a3be703833c91d412b00ce5ea61956dcae2d40e79" protocol=ttrpc version=3 Dec 12 17:29:41.575120 containerd[2012]: time="2025-12-12T17:29:41.575046203Z" level=info msg="CreateContainer within sandbox \"ac2cab7dcd35a71f435e177dcd2b02d1edb8bd28619c0243bf09bd5844b25d2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e74c01f6be407bcf738eb1a8c089576683a243ccf66a5212adb4a26eee6a75ee\"" Dec 12 17:29:41.576470 containerd[2012]: time="2025-12-12T17:29:41.576419939Z" level=info msg="StartContainer for \"e74c01f6be407bcf738eb1a8c089576683a243ccf66a5212adb4a26eee6a75ee\"" Dec 12 17:29:41.583352 containerd[2012]: time="2025-12-12T17:29:41.583276271Z" level=info msg="connecting to shim e74c01f6be407bcf738eb1a8c089576683a243ccf66a5212adb4a26eee6a75ee" address="unix:///run/containerd/s/828992ed9b0d0293b59169a9faf9232ce4f1db369ee4f99cba40b38d58ade74d" protocol=ttrpc version=3 Dec 12 17:29:41.610358 systemd[1]: Started cri-containerd-ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd.scope - libcontainer container ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd. Dec 12 17:29:41.635316 systemd[1]: Started cri-containerd-e74c01f6be407bcf738eb1a8c089576683a243ccf66a5212adb4a26eee6a75ee.scope - libcontainer container e74c01f6be407bcf738eb1a8c089576683a243ccf66a5212adb4a26eee6a75ee. Dec 12 17:29:41.658000 audit: BPF prog-id=269 op=LOAD Dec 12 17:29:41.663939 kernel: audit: type=1334 audit(1765560581.658:949): prog-id=269 op=LOAD Dec 12 17:29:41.664095 kernel: audit: type=1334 audit(1765560581.661:950): prog-id=270 op=LOAD Dec 12 17:29:41.661000 audit: BPF prog-id=270 op=LOAD Dec 12 17:29:41.661000 audit[6050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3623 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.671485 kernel: audit: type=1300 audit(1765560581.661:950): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3623 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376530336161616239356535373530393366646461376464363334 Dec 12 17:29:41.679452 kernel: audit: type=1327 audit(1765560581.661:950): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376530336161616239356535373530393366646461376464363334 Dec 12 17:29:41.661000 audit: BPF prog-id=270 op=UNLOAD Dec 12 17:29:41.661000 audit[6050]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 
a3=0 items=0 ppid=3623 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376530336161616239356535373530393366646461376464363334 Dec 12 17:29:41.661000 audit: BPF prog-id=271 op=LOAD Dec 12 17:29:41.661000 audit[6050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3623 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376530336161616239356535373530393366646461376464363334 Dec 12 17:29:41.663000 audit: BPF prog-id=272 op=LOAD Dec 12 17:29:41.663000 audit[6050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3623 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376530336161616239356535373530393366646461376464363334 Dec 12 17:29:41.663000 audit: BPF prog-id=272 op=UNLOAD Dec 12 17:29:41.663000 audit[6050]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3623 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376530336161616239356535373530393366646461376464363334 Dec 12 17:29:41.663000 audit: BPF prog-id=271 op=UNLOAD Dec 12 17:29:41.663000 audit[6050]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3623 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376530336161616239356535373530393366646461376464363334 Dec 12 17:29:41.663000 audit: BPF prog-id=273 op=LOAD Dec 12 17:29:41.663000 audit[6050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3623 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376530336161616239356535373530393366646461376464363334 Dec 12 17:29:41.678000 audit: BPF prog-id=274 op=LOAD Dec 12 17:29:41.680000 audit: BPF prog-id=275 op=LOAD Dec 12 17:29:41.680000 audit[6059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3017 pid=6059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346330316636626534303762636637333865623161386330383935 Dec 12 17:29:41.680000 audit: BPF prog-id=275 op=UNLOAD Dec 12 17:29:41.680000 audit[6059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3017 pid=6059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346330316636626534303762636637333865623161386330383935 Dec 12 17:29:41.680000 audit: BPF prog-id=276 op=LOAD Dec 12 17:29:41.680000 audit[6059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3017 pid=6059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346330316636626534303762636637333865623161386330383935 Dec 12 17:29:41.681000 audit: BPF prog-id=277 op=LOAD Dec 12 17:29:41.681000 audit[6059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3017 pid=6059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346330316636626534303762636637333865623161386330383935 Dec 12 17:29:41.681000 audit: BPF prog-id=277 op=UNLOAD Dec 12 17:29:41.681000 audit[6059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3017 pid=6059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.681000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346330316636626534303762636637333865623161386330383935 Dec 12 17:29:41.681000 audit: BPF prog-id=276 op=UNLOAD Dec 12 17:29:41.681000 audit[6059]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3017 pid=6059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346330316636626534303762636637333865623161386330383935 Dec 12 17:29:41.681000 audit: BPF prog-id=278 op=LOAD Dec 12 17:29:41.681000 audit[6059]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3017 pid=6059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:41.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346330316636626534303762636637333865623161386330383935 Dec 12 17:29:41.748603 containerd[2012]: time="2025-12-12T17:29:41.747961080Z" level=info msg="StartContainer for \"ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd\" returns successfully" Dec 12 17:29:41.769875 containerd[2012]: time="2025-12-12T17:29:41.769691328Z" level=info msg="StartContainer for \"e74c01f6be407bcf738eb1a8c089576683a243ccf66a5212adb4a26eee6a75ee\" returns successfully" Dec 12 17:29:45.424603 kubelet[3325]: E1212 17:29:45.424527 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:29:47.055065 systemd[1]: cri-containerd-ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323.scope: Deactivated successfully. Dec 12 17:29:47.056629 systemd[1]: cri-containerd-ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323.scope: Consumed 5.428s CPU time, 21.9M memory peak. 
Dec 12 17:29:47.063349 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 12 17:29:47.063469 kernel: audit: type=1334 audit(1765560587.058:965): prog-id=279 op=LOAD Dec 12 17:29:47.058000 audit: BPF prog-id=279 op=LOAD Dec 12 17:29:47.067882 kernel: audit: type=1334 audit(1765560587.058:966): prog-id=100 op=UNLOAD Dec 12 17:29:47.068016 kernel: audit: type=1334 audit(1765560587.063:967): prog-id=115 op=UNLOAD Dec 12 17:29:47.058000 audit: BPF prog-id=100 op=UNLOAD Dec 12 17:29:47.068188 kernel: audit: type=1334 audit(1765560587.063:968): prog-id=119 op=UNLOAD Dec 12 17:29:47.063000 audit: BPF prog-id=115 op=UNLOAD Dec 12 17:29:47.063000 audit: BPF prog-id=119 op=UNLOAD Dec 12 17:29:47.071925 containerd[2012]: time="2025-12-12T17:29:47.071073135Z" level=info msg="received container exit event container_id:\"ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323\" id:\"ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323\" pid:3166 exit_status:1 exited_at:{seconds:1765560587 nanos:70376199}" Dec 12 17:29:47.128418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323-rootfs.mount: Deactivated successfully. Dec 12 17:29:47.540829 kubelet[3325]: I1212 17:29:47.540773 3325 scope.go:117] "RemoveContainer" containerID="ea8054c75408e125dc1bda354a1ffd54aa0f64299103a50a596ab2e98b304323" Dec 12 17:29:47.544425 containerd[2012]: time="2025-12-12T17:29:47.544344065Z" level=info msg="CreateContainer within sandbox \"dbe49d6d26c660c019e017d67daf62c8bd662a9e01223bd97e5c5523cb364286\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 12 17:29:47.563290 containerd[2012]: time="2025-12-12T17:29:47.563224625Z" level=info msg="Container 84d135f54ec8ae836ca1bf50630cf6d0a51f94a0f0d68a236c7f830a861c5953: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:47.581649 containerd[2012]: time="2025-12-12T17:29:47.581573009Z" level=info msg="CreateContainer within sandbox \"dbe49d6d26c660c019e017d67daf62c8bd662a9e01223bd97e5c5523cb364286\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"84d135f54ec8ae836ca1bf50630cf6d0a51f94a0f0d68a236c7f830a861c5953\"" Dec 12 17:29:47.582804 containerd[2012]: time="2025-12-12T17:29:47.582754505Z" level=info msg="StartContainer for \"84d135f54ec8ae836ca1bf50630cf6d0a51f94a0f0d68a236c7f830a861c5953\"" Dec 12 17:29:47.585209 containerd[2012]: time="2025-12-12T17:29:47.585151793Z" level=info msg="connecting to shim 84d135f54ec8ae836ca1bf50630cf6d0a51f94a0f0d68a236c7f830a861c5953" address="unix:///run/containerd/s/6dbf6cfa097d94462b59d547d4a17d4b9eb8ccfb5243212a732bc242ba9c542e" protocol=ttrpc version=3 Dec 12 17:29:47.627205 systemd[1]: Started cri-containerd-84d135f54ec8ae836ca1bf50630cf6d0a51f94a0f0d68a236c7f830a861c5953.scope - libcontainer container 84d135f54ec8ae836ca1bf50630cf6d0a51f94a0f0d68a236c7f830a861c5953. 
Dec 12 17:29:47.654000 audit: BPF prog-id=280 op=LOAD Dec 12 17:29:47.659834 kernel: audit: type=1334 audit(1765560587.654:969): prog-id=280 op=LOAD Dec 12 17:29:47.659997 kernel: audit: type=1334 audit(1765560587.656:970): prog-id=281 op=LOAD Dec 12 17:29:47.656000 audit: BPF prog-id=281 op=LOAD Dec 12 17:29:47.667770 kernel: audit: type=1300 audit(1765560587.656:970): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3019 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:47.656000 audit[6130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3019 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:47.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643133356635346563386165383336636131626635303633306366 Dec 12 17:29:47.674342 kernel: audit: type=1327 audit(1765560587.656:970): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643133356635346563386165383336636131626635303633306366 Dec 12 17:29:47.656000 audit: BPF prog-id=281 op=UNLOAD Dec 12 17:29:47.677098 kernel: audit: type=1334 audit(1765560587.656:971): prog-id=281 op=UNLOAD Dec 12 17:29:47.656000 audit[6130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:47.683201 kernel: audit: type=1300 audit(1765560587.656:971): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:47.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643133356635346563386165383336636131626635303633306366 Dec 12 17:29:47.656000 audit: BPF prog-id=282 op=LOAD Dec 12 17:29:47.656000 audit[6130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3019 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:47.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643133356635346563386165383336636131626635303633306366 Dec 12 17:29:47.656000 audit: BPF prog-id=283 op=LOAD Dec 12 17:29:47.656000 audit[6130]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3019 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:47.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643133356635346563386165383336636131626635303633306366 Dec 12 17:29:47.656000 audit: BPF prog-id=283 op=UNLOAD Dec 12 17:29:47.656000 audit[6130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:47.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643133356635346563386165383336636131626635303633306366 Dec 12 17:29:47.656000 audit: BPF prog-id=282 op=UNLOAD Dec 12 17:29:47.656000 audit[6130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:47.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643133356635346563386165383336636131626635303633306366 Dec 12 17:29:47.656000 audit: BPF prog-id=284 op=LOAD Dec 12 17:29:47.656000 audit[6130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3019 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:47.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643133356635346563386165383336636131626635303633306366 Dec 12 17:29:47.742790 containerd[2012]: time="2025-12-12T17:29:47.742706826Z" level=info msg="StartContainer for \"84d135f54ec8ae836ca1bf50630cf6d0a51f94a0f0d68a236c7f830a861c5953\" returns successfully" Dec 12 17:29:48.425874 kubelet[3325]: E1212 17:29:48.425810 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109" Dec 12 17:29:48.709673 kubelet[3325]: E1212 17:29:48.709422 3325 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.16.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-55?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:29:50.424356 kubelet[3325]: E1212 17:29:50.424277 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-ln8dl" podUID="df70d9f4-d51b-472c-b1f4-6f65f02c50aa" Dec 12 17:29:51.426838 kubelet[3325]: E1212 17:29:51.426731 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hb6pw" podUID="8797b6d6-7a5e-4865-91c2-2bd3d90f57cf" Dec 12 17:29:51.426838 kubelet[3325]: E1212 17:29:51.426758 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c927k" podUID="87213e25-1478-4b01-ac6c-54452f7f57dd" Dec 12 17:29:52.424741 kubelet[3325]: E1212 17:29:52.424673 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b7498b7d9-pzhlv" podUID="baecc54d-00d4-4fc9-9061-9d5f893dca48" Dec 12 17:29:53.256609 systemd[1]: cri-containerd-ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd.scope: Deactivated successfully. 
Dec 12 17:29:53.260472 containerd[2012]: time="2025-12-12T17:29:53.260288985Z" level=info msg="received container exit event container_id:\"ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd\" id:\"ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd\" pid:6074 exit_status:1 exited_at:{seconds:1765560593 nanos:259752549}" Dec 12 17:29:53.264978 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 12 17:29:53.265083 kernel: audit: type=1334 audit(1765560593.260:977): prog-id=269 op=UNLOAD Dec 12 17:29:53.260000 audit: BPF prog-id=269 op=UNLOAD Dec 12 17:29:53.260000 audit: BPF prog-id=273 op=UNLOAD Dec 12 17:29:53.267414 kernel: audit: type=1334 audit(1765560593.260:978): prog-id=273 op=UNLOAD Dec 12 17:29:53.308278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd-rootfs.mount: Deactivated successfully. Dec 12 17:29:53.425756 kubelet[3325]: E1212 17:29:53.425667 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cbdd468db-lnm8s" podUID="3ec49c85-5274-4ed7-b914-fc08a271b46e" Dec 12 17:29:53.574255 kubelet[3325]: I1212 17:29:53.574043 3325 scope.go:117] "RemoveContainer" containerID="d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3" Dec 12 17:29:53.575185 kubelet[3325]: I1212 17:29:53.575145 3325 scope.go:117] "RemoveContainer" containerID="ef7e03aaab95e575093fdda7dd63469a28615acaa3bbb36d88634822c7f6cbcd" Dec 12 17:29:53.575681 kubelet[3325]: E1212 17:29:53.575548 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-kk54b_tigera-operator(8b694eca-2a96-4f59-951f-b83b88768a93)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-kk54b" podUID="8b694eca-2a96-4f59-951f-b83b88768a93" Dec 12 17:29:53.578674 containerd[2012]: time="2025-12-12T17:29:53.578619767Z" level=info msg="RemoveContainer for \"d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3\"" Dec 12 17:29:53.587686 containerd[2012]: time="2025-12-12T17:29:53.587571647Z" level=info msg="RemoveContainer for \"d8102fc62d1086a5601ac09860a8b0f08fce17e17114be647f75d26fa1b2ffc3\" returns successfully" Dec 12 17:29:58.710924 kubelet[3325]: E1212 17:29:58.710587 3325 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-55?timeout=10s\": context deadline exceeded" Dec 12 17:30:00.424749 kubelet[3325]: E1212 17:30:00.424669 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcd4b4759-jhpmk" podUID="146b6197-b092-48f2-948f-08d710a51bd7" Dec 12 17:30:01.425579 kubelet[3325]: E1212 17:30:01.425241 3325 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b6bc6ffc4-bklww" podUID="334ae9c1-5a14-4018-8c8f-986d294ed109"