Jan 13 23:39:14.946892 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 13 23:39:14.956577 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 13 21:43:11 -00 2026 Jan 13 23:39:14.956606 kernel: KASLR disabled due to lack of seed Jan 13 23:39:14.956624 kernel: efi: EFI v2.7 by EDK II Jan 13 23:39:14.956640 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598 Jan 13 23:39:14.956656 kernel: secureboot: Secure boot disabled Jan 13 23:39:14.956675 kernel: ACPI: Early table checksum verification disabled Jan 13 23:39:14.956691 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 13 23:39:14.956708 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 13 23:39:14.956728 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 13 23:39:14.956745 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 13 23:39:14.956761 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 13 23:39:14.956777 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 13 23:39:14.956794 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 13 23:39:14.956817 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 13 23:39:14.956835 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 13 23:39:14.956852 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 13 23:39:14.956869 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 13 23:39:14.956887 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 13 23:39:14.956925 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 13 23:39:14.956945 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 13 23:39:14.956963 kernel: printk: legacy bootconsole [uart0] enabled Jan 13 23:39:14.956980 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 13 23:39:14.956998 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 23:39:14.957021 kernel: NODE_DATA(0) allocated [mem 0x4b584ea00-0x4b5855fff] Jan 13 23:39:14.957038 kernel: Zone ranges: Jan 13 23:39:14.957055 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 23:39:14.957072 kernel: DMA32 empty Jan 13 23:39:14.957089 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 13 23:39:14.957105 kernel: Device empty Jan 13 23:39:14.957122 kernel: Movable zone start for each node Jan 13 23:39:14.957139 kernel: Early memory node ranges Jan 13 23:39:14.957156 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 13 23:39:14.957173 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 13 23:39:14.957190 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 13 23:39:14.957207 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 13 23:39:14.957228 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 13 23:39:14.957244 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 13 23:39:14.957261 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 13 23:39:14.957278 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 13 23:39:14.957302 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 23:39:14.957323 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 13 23:39:14.957341 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 13 23:39:14.957359 kernel: psci: probing for conduit method from ACPI. Jan 13 23:39:14.957377 kernel: psci: PSCIv1.0 detected in firmware. Jan 13 23:39:14.957395 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 23:39:14.957413 kernel: psci: Trusted OS migration not required Jan 13 23:39:14.957431 kernel: psci: SMC Calling Convention v1.1 Jan 13 23:39:14.957448 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 13 23:39:14.957466 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 13 23:39:14.957488 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 13 23:39:14.957507 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 23:39:14.957524 kernel: Detected PIPT I-cache on CPU0 Jan 13 23:39:14.957542 kernel: CPU features: detected: GIC system register CPU interface Jan 13 23:39:14.957560 kernel: CPU features: detected: Spectre-v2 Jan 13 23:39:14.957578 kernel: CPU features: detected: Spectre-v3a Jan 13 23:39:14.957595 kernel: CPU features: detected: Spectre-BHB Jan 13 23:39:14.957613 kernel: CPU features: detected: ARM erratum 1742098 Jan 13 23:39:14.957631 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 13 23:39:14.957648 kernel: alternatives: applying boot alternatives Jan 13 23:39:14.957668 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2e92265a189403c21ae2a2ae9e6d4fed0782e0e430fbcb369a7bb0db156274f Jan 13 23:39:14.957691 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 23:39:14.957709 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 23:39:14.957727 kernel: Fallback order for Node 0: 0 Jan 13 23:39:14.957745 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 13 23:39:14.957764 kernel: Policy zone: Normal Jan 13 23:39:14.957781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 23:39:14.957799 kernel: software IO TLB: area num 2. Jan 13 23:39:14.957818 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Jan 13 23:39:14.957836 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 23:39:14.957854 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 23:39:14.957878 kernel: rcu: RCU event tracing is enabled. Jan 13 23:39:14.957913 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 23:39:14.957937 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 23:39:14.957956 kernel: Tracing variant of Tasks RCU enabled. Jan 13 23:39:14.957974 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 23:39:14.957992 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 23:39:14.958010 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 13 23:39:14.958028 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 23:39:14.958046 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 23:39:14.958064 kernel: GICv3: 96 SPIs implemented Jan 13 23:39:14.958082 kernel: GICv3: 0 Extended SPIs implemented Jan 13 23:39:14.958106 kernel: Root IRQ handler: gic_handle_irq Jan 13 23:39:14.958125 kernel: GICv3: GICv3 features: 16 PPIs Jan 13 23:39:14.958143 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 13 23:39:14.958161 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 13 23:39:14.958178 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 13 23:39:14.958196 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 23:39:14.958214 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 13 23:39:14.958232 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 13 23:39:14.958250 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 13 23:39:14.958268 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 13 23:39:14.958285 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 23:39:14.958307 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 13 23:39:14.958325 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 13 23:39:14.958344 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 13 23:39:14.958362 kernel: Console: colour dummy device 80x25 Jan 13 23:39:14.958381 kernel: printk: legacy console [tty1] enabled Jan 13 23:39:14.958400 kernel: ACPI: Core revision 20240827 Jan 13 23:39:14.958419 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 13 23:39:14.958438 kernel: pid_max: default: 32768 minimum: 301 Jan 13 23:39:14.958461 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 13 23:39:14.958480 kernel: landlock: Up and running. Jan 13 23:39:14.958499 kernel: SELinux: Initializing. Jan 13 23:39:14.958518 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 23:39:14.958537 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 23:39:14.958556 kernel: rcu: Hierarchical SRCU implementation. Jan 13 23:39:14.958575 kernel: rcu: Max phase no-delay instances is 400. Jan 13 23:39:14.958594 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 13 23:39:14.958617 kernel: Remapping and enabling EFI services. Jan 13 23:39:14.958636 kernel: smp: Bringing up secondary CPUs ... Jan 13 23:39:14.958655 kernel: Detected PIPT I-cache on CPU1 Jan 13 23:39:14.958673 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 13 23:39:14.958693 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 13 23:39:14.958711 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 13 23:39:14.958730 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 23:39:14.958753 kernel: SMP: Total of 2 processors activated. 
Jan 13 23:39:14.958771 kernel: CPU: All CPU(s) started at EL1 Jan 13 23:39:14.958800 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 23:39:14.958824 kernel: CPU features: detected: 32-bit EL1 Support Jan 13 23:39:14.958843 kernel: CPU features: detected: CRC32 instructions Jan 13 23:39:14.958862 kernel: alternatives: applying system-wide alternatives Jan 13 23:39:14.958884 kernel: Memory: 3823340K/4030464K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 185776K reserved, 16384K cma-reserved) Jan 13 23:39:14.967925 kernel: devtmpfs: initialized Jan 13 23:39:14.967985 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 23:39:14.968007 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 23:39:14.968027 kernel: 23632 pages in range for non-PLT usage Jan 13 23:39:14.968048 kernel: 515152 pages in range for PLT usage Jan 13 23:39:14.968067 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 23:39:14.968091 kernel: SMBIOS 3.0.0 present. Jan 13 23:39:14.968110 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 13 23:39:14.968129 kernel: DMI: Memory slots populated: 0/0 Jan 13 23:39:14.968149 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 23:39:14.968168 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 23:39:14.968188 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 23:39:14.968207 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 23:39:14.968231 kernel: audit: initializing netlink subsys (disabled) Jan 13 23:39:14.968251 kernel: audit: type=2000 audit(0.224:1): state=initialized audit_enabled=0 res=1 Jan 13 23:39:14.968270 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 23:39:14.968289 kernel: cpuidle: using governor menu Jan 13 23:39:14.968323 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 13 23:39:14.968358 kernel: ASID allocator initialised with 65536 entries Jan 13 23:39:14.968380 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 23:39:14.968407 kernel: Serial: AMBA PL011 UART driver Jan 13 23:39:14.968426 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 23:39:14.968446 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 23:39:14.968465 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 23:39:14.968484 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 23:39:14.968503 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 23:39:14.968523 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 23:39:14.968546 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 23:39:14.968565 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 23:39:14.968584 kernel: ACPI: Added _OSI(Module Device) Jan 13 23:39:14.968604 kernel: ACPI: Added _OSI(Processor Device) Jan 13 23:39:14.968623 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 23:39:14.968642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 23:39:14.968661 kernel: ACPI: Interpreter enabled Jan 13 23:39:14.968684 kernel: ACPI: Using GIC for interrupt routing Jan 13 23:39:14.968704 kernel: ACPI: MCFG table detected, 1 entries Jan 13 23:39:14.968723 kernel: ACPI: CPU0 has been hot-added Jan 13 23:39:14.968742 kernel: ACPI: CPU1 has been hot-added Jan 13 23:39:14.968761 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 13 23:39:14.969145 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 23:39:14.969408 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 23:39:14.969669 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 23:39:14.969948 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 13 23:39:14.970209 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 13 23:39:14.970235 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 13 23:39:14.970255 kernel: acpiphp: Slot [1] registered Jan 13 23:39:14.970274 kernel: acpiphp: Slot [2] registered Jan 13 23:39:14.970299 kernel: acpiphp: Slot [3] registered Jan 13 23:39:14.970319 kernel: acpiphp: Slot [4] registered Jan 13 23:39:14.970338 kernel: acpiphp: Slot [5] registered Jan 13 23:39:14.970357 kernel: acpiphp: Slot [6] registered Jan 13 23:39:14.970376 kernel: acpiphp: Slot [7] registered Jan 13 23:39:14.970395 kernel: acpiphp: Slot [8] registered Jan 13 23:39:14.970415 kernel: acpiphp: Slot [9] registered Jan 13 23:39:14.970434 kernel: acpiphp: Slot [10] registered Jan 13 23:39:14.970457 kernel: acpiphp: Slot [11] registered Jan 13 23:39:14.970476 kernel: acpiphp: Slot [12] registered Jan 13 23:39:14.970495 kernel: acpiphp: Slot [13] registered Jan 13 23:39:14.970515 kernel: acpiphp: Slot [14] registered Jan 13 23:39:14.970534 kernel: acpiphp: Slot [15] registered Jan 13 23:39:14.970553 kernel: acpiphp: Slot [16] registered Jan 13 23:39:14.970573 kernel: acpiphp: Slot [17] registered Jan 13 23:39:14.970596 kernel: acpiphp: Slot [18] registered Jan 13 23:39:14.970615 kernel: acpiphp: Slot [19] registered Jan 13 23:39:14.970635 kernel: acpiphp: Slot [20] registered Jan 13 23:39:14.970654 kernel: acpiphp: Slot [21] registered Jan 13 23:39:14.970673 
kernel: acpiphp: Slot [22] registered Jan 13 23:39:14.970692 kernel: acpiphp: Slot [23] registered Jan 13 23:39:14.970712 kernel: acpiphp: Slot [24] registered Jan 13 23:39:14.970734 kernel: acpiphp: Slot [25] registered Jan 13 23:39:14.970754 kernel: acpiphp: Slot [26] registered Jan 13 23:39:14.970773 kernel: acpiphp: Slot [27] registered Jan 13 23:39:14.970792 kernel: acpiphp: Slot [28] registered Jan 13 23:39:14.970812 kernel: acpiphp: Slot [29] registered Jan 13 23:39:14.970831 kernel: acpiphp: Slot [30] registered Jan 13 23:39:14.970850 kernel: acpiphp: Slot [31] registered Jan 13 23:39:14.970869 kernel: PCI host bridge to bus 0000:00 Jan 13 23:39:14.971146 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 13 23:39:14.971380 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 23:39:14.971610 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 13 23:39:14.971838 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 13 23:39:14.972182 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 13 23:39:14.972465 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 13 23:39:14.972720 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 13 23:39:14.973047 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 13 23:39:14.973312 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 13 23:39:14.973565 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 23:39:14.973844 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 13 23:39:14.974134 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 13 23:39:14.974388 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 13 23:39:14.974638 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 13 23:39:14.974889 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 23:39:14.975147 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 13 23:39:14.975381 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 23:39:14.975608 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 13 23:39:14.975634 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 23:39:14.975654 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 23:39:14.975674 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 23:39:14.975693 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 23:39:14.975713 kernel: iommu: Default domain type: Translated Jan 13 23:39:14.975737 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 23:39:14.975756 kernel: efivars: Registered efivars operations Jan 13 23:39:14.975775 kernel: vgaarb: loaded Jan 13 23:39:14.975794 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 23:39:14.975813 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 23:39:14.975832 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 23:39:14.975851 kernel: pnp: PnP ACPI init Jan 13 23:39:14.976789 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 13 23:39:14.976825 kernel: pnp: PnP ACPI: found 1 devices Jan 13 23:39:14.976845 kernel: NET: Registered PF_INET protocol family Jan 13 23:39:14.976865 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 23:39:14.976885 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 23:39:14.976929 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 23:39:14.976951 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 23:39:14.976979 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 23:39:14.976999 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 23:39:14.977018 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 23:39:14.977037 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 23:39:14.977056 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 23:39:14.977076 kernel: PCI: CLS 0 bytes, default 64 Jan 13 23:39:14.977094 kernel: kvm [1]: HYP mode not available Jan 13 23:39:14.977118 kernel: Initialise system trusted keyrings Jan 13 23:39:14.977137 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 23:39:14.977156 kernel: Key type asymmetric registered Jan 13 23:39:14.977175 kernel: Asymmetric key parser 'x509' registered Jan 13 23:39:14.977194 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 13 23:39:14.977214 kernel: io scheduler mq-deadline registered Jan 13 23:39:14.977233 kernel: io scheduler kyber registered Jan 13 23:39:14.977256 kernel: io scheduler bfq registered Jan 13 23:39:14.977542 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 13 23:39:14.977570 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 23:39:14.977591 kernel: ACPI: button: Power Button [PWRB] Jan 13 23:39:14.977610 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 13 23:39:14.977629 kernel: ACPI: button: Sleep Button [SLPB] Jan 13 23:39:14.977653 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 23:39:14.977674 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 23:39:14.977952 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 13 23:39:14.977980 kernel: printk: legacy console [ttyS0] disabled Jan 13 23:39:14.978000 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 13 23:39:14.978019 kernel: printk: legacy console [ttyS0] enabled Jan 13 23:39:14.978038 kernel: printk: legacy bootconsole [uart0] disabled Jan 13 23:39:14.978062 kernel: thunder_xcv, ver 1.0 Jan 13 23:39:14.978082 kernel: thunder_bgx, ver 1.0 Jan 13 23:39:14.978101 kernel: nicpf, ver 1.0 Jan 13 23:39:14.978120 kernel: nicvf, ver 1.0 Jan 13 23:39:14.978401 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 23:39:14.981507 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-13T23:39:11 UTC (1768347551) Jan 13 23:39:14.981537 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 23:39:14.981567 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 13 23:39:14.981587 kernel: NET: Registered PF_INET6 protocol family Jan 13 23:39:14.981606 kernel: watchdog: NMI not fully supported Jan 13 23:39:14.981625 kernel: watchdog: Hard watchdog permanently disabled Jan 13 23:39:14.981645 kernel: Segment Routing with IPv6 Jan 13 23:39:14.981665 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 23:39:14.981684 kernel: NET: Registered PF_PACKET protocol family Jan 13 23:39:14.981707 kernel: Key type 
dns_resolver registered Jan 13 23:39:14.981727 kernel: registered taskstats version 1 Jan 13 23:39:14.981746 kernel: Loading compiled-in X.509 certificates Jan 13 23:39:14.981766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 61f104a5e4017e43c6bf0c9744e6a522053d7383' Jan 13 23:39:14.981785 kernel: Demotion targets for Node 0: null Jan 13 23:39:14.981804 kernel: Key type .fscrypt registered Jan 13 23:39:14.981823 kernel: Key type fscrypt-provisioning registered Jan 13 23:39:14.981846 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 23:39:14.981865 kernel: ima: Allocated hash algorithm: sha1 Jan 13 23:39:14.981885 kernel: ima: No architecture policies found Jan 13 23:39:14.981925 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 23:39:14.981947 kernel: clk: Disabling unused clocks Jan 13 23:39:14.981966 kernel: PM: genpd: Disabling unused power domains Jan 13 23:39:14.981985 kernel: Freeing unused kernel memory: 12480K Jan 13 23:39:14.982004 kernel: Run /init as init process Jan 13 23:39:14.982029 kernel: with arguments: Jan 13 23:39:14.982048 kernel: /init Jan 13 23:39:14.982066 kernel: with environment: Jan 13 23:39:14.982085 kernel: HOME=/ Jan 13 23:39:14.982104 kernel: TERM=linux Jan 13 23:39:14.982124 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 23:39:14.982343 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 23:39:14.982543 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 23:39:14.982570 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 23:39:14.982590 kernel: GPT:25804799 != 33554431 Jan 13 23:39:14.982609 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 23:39:14.982628 kernel: GPT:25804799 != 33554431 Jan 13 23:39:14.982647 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 23:39:14.982671 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 23:39:14.982690 kernel: SCSI subsystem initialized Jan 13 23:39:14.982710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 23:39:14.982729 kernel: device-mapper: uevent: version 1.0.3 Jan 13 23:39:14.982749 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 13 23:39:14.982768 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 13 23:39:14.982788 kernel: raid6: neonx8 gen() 6500 MB/s Jan 13 23:39:14.982811 kernel: raid6: neonx4 gen() 6477 MB/s Jan 13 23:39:14.982830 kernel: raid6: neonx2 gen() 5354 MB/s Jan 13 23:39:14.982849 kernel: raid6: neonx1 gen() 3921 MB/s Jan 13 23:39:14.982868 kernel: raid6: int64x8 gen() 3616 MB/s Jan 13 23:39:14.982888 kernel: raid6: int64x4 gen() 3697 MB/s Jan 13 23:39:14.982926 kernel: raid6: int64x2 gen() 3540 MB/s Jan 13 23:39:14.982947 kernel: raid6: int64x1 gen() 2745 MB/s Jan 13 23:39:14.982971 kernel: raid6: using algorithm neonx8 gen() 6500 MB/s Jan 13 23:39:14.982991 kernel: raid6: .... 
xor() 4757 MB/s, rmw enabled Jan 13 23:39:14.983010 kernel: raid6: using neon recovery algorithm Jan 13 23:39:14.983029 kernel: xor: measuring software checksum speed Jan 13 23:39:14.983048 kernel: 8regs : 12895 MB/sec Jan 13 23:39:14.983067 kernel: 32regs : 13005 MB/sec Jan 13 23:39:14.983087 kernel: arm64_neon : 8909 MB/sec Jan 13 23:39:14.983109 kernel: xor: using function: 32regs (13005 MB/sec) Jan 13 23:39:14.983129 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 23:39:14.983148 kernel: BTRFS: device fsid 96ce121f-260d-446f-a0e2-a59fdf56d58c devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (221) Jan 13 23:39:14.983168 kernel: BTRFS info (device dm-0): first mount of filesystem 96ce121f-260d-446f-a0e2-a59fdf56d58c Jan 13 23:39:14.983187 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:39:14.983207 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 23:39:14.983226 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 23:39:14.983249 kernel: BTRFS info (device dm-0): enabling free space tree Jan 13 23:39:14.983268 kernel: loop: module loaded Jan 13 23:39:14.983287 kernel: loop0: detected capacity change from 0 to 91840 Jan 13 23:39:14.983306 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 23:39:14.983328 systemd[1]: Successfully made /usr/ read-only. Jan 13 23:39:14.983353 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:39:14.983379 systemd[1]: Detected virtualization amazon. Jan 13 23:39:14.983400 systemd[1]: Detected architecture arm64. Jan 13 23:39:14.983420 systemd[1]: Running in initrd. Jan 13 23:39:14.983440 systemd[1]: No hostname configured, using default hostname. Jan 13 23:39:14.983461 systemd[1]: Hostname set to . Jan 13 23:39:14.983482 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 13 23:39:14.983502 systemd[1]: Queued start job for default target initrd.target. Jan 13 23:39:14.983527 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:39:14.983547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:39:14.983568 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:39:14.983590 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 23:39:14.983612 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:39:14.983652 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 23:39:14.983675 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 23:39:14.983696 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:39:14.983718 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:39:14.983739 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:39:14.983764 systemd[1]: Reached target paths.target - Path Units. 
Jan 13 23:39:14.983786 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:39:14.983807 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:39:14.983828 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:39:14.983850 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 23:39:14.983871 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:39:14.983892 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:39:14.983952 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 23:39:14.983976 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 13 23:39:14.983998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:39:14.984019 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:39:14.984041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:39:14.984062 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:39:14.984083 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 23:39:14.984110 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 23:39:14.984132 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:39:14.984153 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 23:39:14.984175 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 13 23:39:14.984197 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 23:39:14.984218 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 23:39:14.984239 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:39:14.984265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:39:14.984287 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 23:39:14.984313 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:39:14.984335 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 23:39:14.984357 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 23:39:14.984378 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 23:39:14.984398 kernel: Bridge firewalling registered Jan 13 23:39:14.984423 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 23:39:14.984445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:39:14.984508 systemd-journald[360]: Collecting audit messages is enabled. Jan 13 23:39:14.984557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:39:14.984580 kernel: audit: type=1130 audit(1768347554.946:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:14.984600 kernel: audit: type=1334 audit(1768347554.955:3): prog-id=6 op=LOAD Jan 13 23:39:14.984621 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:39:14.984643 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:39:14.984664 kernel: audit: type=1130 audit(1768347554.980:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:14.984688 systemd-journald[360]: Journal started Jan 13 23:39:14.984725 systemd-journald[360]: Runtime Journal (/run/log/journal/ec2e741dd8cd12b757bcf4d3d0c95487) is 8M, max 75.3M, 67.3M free. Jan 13 23:39:14.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:14.955000 audit: BPF prog-id=6 op=LOAD Jan 13 23:39:14.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:14.891790 systemd-modules-load[362]: Inserted module 'br_netfilter' Jan 13 23:39:14.991045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:39:14.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:14.999795 kernel: audit: type=1130 audit(1768347554.992:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.002193 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:39:15.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.008959 kernel: audit: type=1130 audit(1768347555.000:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.011143 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 23:39:15.015794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:39:15.037291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:39:15.069954 systemd-tmpfiles[386]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 13 23:39:15.087131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:39:15.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.100941 kernel: audit: type=1130 audit(1768347555.088:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 13 23:39:15.104636 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:39:15.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.122003 kernel: audit: type=1130 audit(1768347555.114:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.128986 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:39:15.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.138937 kernel: audit: type=1130 audit(1768347555.130:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.211648 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 23:39:15.229581 systemd-resolved[374]: Positive Trust Anchors: Jan 13 23:39:15.230028 systemd-resolved[374]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:39:15.230037 systemd-resolved[374]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:39:15.230099 systemd-resolved[374]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:39:15.294111 dracut-cmdline[400]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a2e92265a189403c21ae2a2ae9e6d4fed0782e0e430fbcb369a7bb0db156274f Jan 13 23:39:15.517934 kernel: random: crng init done Jan 13 23:39:15.528711 systemd-resolved[374]: Defaulting to hostname 'linux'. Jan 13 23:39:15.539179 kernel: Loading iSCSI transport class v2.0-870. Jan 13 23:39:15.539217 kernel: audit: type=1130 audit(1768347555.530:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.530939 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 13 23:39:15.531430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:39:15.599157 kernel: iscsi: registered transport (tcp) Jan 13 23:39:15.622362 kernel: iscsi: registered transport (qla4xxx) Jan 13 23:39:15.622465 kernel: QLogic iSCSI HBA Driver Jan 13 23:39:15.662055 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 23:39:15.701767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:39:15.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.710756 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:39:15.718145 kernel: audit: type=1130 audit(1768347555.707:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.790408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 23:39:15.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.793744 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 23:39:15.803023 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 23:39:15.875474 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:39:15.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.881000 audit: BPF prog-id=7 op=LOAD Jan 13 23:39:15.881000 audit: BPF prog-id=8 op=LOAD Jan 13 23:39:15.884886 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:39:15.946473 systemd-udevd[644]: Using default interface naming scheme 'v257'. Jan 13 23:39:15.965409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:39:15.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:15.971428 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 23:39:16.028191 dracut-pre-trigger[702]: rd.md=0: removing MD RAID activation Jan 13 23:39:16.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:16.044000 audit: BPF prog-id=9 op=LOAD Jan 13 23:39:16.041713 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:39:16.051718 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 23:39:16.099993 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:39:16.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 13 23:39:16.109176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:39:16.154714 systemd-networkd[750]: lo: Link UP Jan 13 23:39:16.155521 systemd-networkd[750]: lo: Gained carrier Jan 13 23:39:16.159535 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:39:16.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:16.171223 systemd[1]: Reached target network.target - Network. Jan 13 23:39:16.271487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:39:16.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:16.281272 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 23:39:16.480050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:39:16.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:16.480325 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:39:16.485280 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:39:16.517587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:39:16.582706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:39:16.594338 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 23:39:16.594391 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 13 23:39:16.594784 kernel: nvme nvme0: using unchecked data buffer Jan 13 23:39:16.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:16.598936 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 23:39:16.599369 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 23:39:16.608027 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:32:df:e7:5c:f7 Jan 13 23:39:16.614383 (udev-worker)[800]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:39:16.629994 systemd-networkd[750]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:39:16.630016 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 23:39:16.643187 systemd-networkd[750]: eth0: Link UP Jan 13 23:39:16.643553 systemd-networkd[750]: eth0: Gained carrier Jan 13 23:39:16.643578 systemd-networkd[750]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:39:16.668985 systemd-networkd[750]: eth0: DHCPv4 address 172.31.24.127/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 23:39:16.742739 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Jan 13 23:39:16.766833 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 23:39:16.802588 disk-uuid[861]: Primary Header is updated. Jan 13 23:39:16.802588 disk-uuid[861]: Secondary Entries is updated. Jan 13 23:39:16.802588 disk-uuid[861]: Secondary Header is updated. Jan 13 23:39:16.853466 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 23:39:16.962390 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 23:39:16.991552 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 23:39:17.129674 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 23:39:17.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:17.135130 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:39:17.137853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:39:17.149283 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 23:39:17.156292 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 23:39:17.210010 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:39:17.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:17.928466 disk-uuid[863]: Warning: The kernel is still using the old partition table. Jan 13 23:39:17.928466 disk-uuid[863]: The new table will be used at the next reboot or after you Jan 13 23:39:17.928466 disk-uuid[863]: run partprobe(8) or kpartx(8) Jan 13 23:39:17.928466 disk-uuid[863]: The operation has completed successfully. Jan 13 23:39:17.948316 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 23:39:17.948772 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 23:39:17.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:17.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:17.958335 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 23:39:18.009956 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1006) Jan 13 23:39:18.010021 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:39:18.013879 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:39:18.061828 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 23:39:18.061888 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 13 23:39:18.072001 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:39:18.073381 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 13 23:39:18.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:18.080002 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 23:39:18.564080 systemd-networkd[750]: eth0: Gained IPv6LL Jan 13 23:39:19.332685 ignition[1025]: Ignition 2.24.0 Jan 13 23:39:19.332713 ignition[1025]: Stage: fetch-offline Jan 13 23:39:19.333135 ignition[1025]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:39:19.333164 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:39:19.336758 ignition[1025]: Ignition finished successfully Jan 13 23:39:19.343580 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:39:19.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:19.351126 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 23:39:19.389077 ignition[1031]: Ignition 2.24.0 Jan 13 23:39:19.389098 ignition[1031]: Stage: fetch Jan 13 23:39:19.389470 ignition[1031]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:39:19.389491 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:39:19.389621 ignition[1031]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:39:19.407686 ignition[1031]: PUT result: OK Jan 13 23:39:19.411096 ignition[1031]: parsed url from cmdline: "" Jan 13 23:39:19.411222 ignition[1031]: no config URL provided Jan 13 23:39:19.411242 ignition[1031]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 23:39:19.411283 ignition[1031]: no config at "/usr/lib/ignition/user.ign" Jan 13 23:39:19.412270 ignition[1031]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:39:19.417880 ignition[1031]: PUT result: OK Jan 13 23:39:19.420410 ignition[1031]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 23:39:19.424584 ignition[1031]: GET result: OK Jan 13 23:39:19.426040 ignition[1031]: parsing config with SHA512: d10dabd92df1277d67d3e9ad2b7922408af134af7df2913b48172499670918b34dc4a4af8cdb31f47adabfe866f0bd7256c08799a3b8987dec89d995ce8cae94 Jan 13 23:39:19.436228 unknown[1031]: fetched base config from "system" Jan 13 23:39:19.436870 ignition[1031]: fetch: fetch complete Jan 13 23:39:19.436250 unknown[1031]: fetched base config from "system" Jan 13 23:39:19.436882 ignition[1031]: fetch: fetch passed Jan 13 23:39:19.436263 unknown[1031]: fetched user config from "aws" Jan 13 23:39:19.436998 ignition[1031]: Ignition finished successfully Jan 13 23:39:19.447762 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 23:39:19.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:19.454559 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 13 23:39:19.501866 ignition[1037]: Ignition 2.24.0 Jan 13 23:39:19.502416 ignition[1037]: Stage: kargs Jan 13 23:39:19.502792 ignition[1037]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:39:19.502814 ignition[1037]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:39:19.502989 ignition[1037]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:39:19.512664 ignition[1037]: PUT result: OK Jan 13 23:39:19.517871 ignition[1037]: kargs: kargs passed Jan 13 23:39:19.518077 ignition[1037]: Ignition finished successfully Jan 13 23:39:19.524435 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 23:39:19.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:19.531116 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 23:39:19.570864 ignition[1043]: Ignition 2.24.0 Jan 13 23:39:19.571414 ignition[1043]: Stage: disks Jan 13 23:39:19.571785 ignition[1043]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:39:19.571833 ignition[1043]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:39:19.572009 ignition[1043]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:39:19.582049 ignition[1043]: PUT result: OK Jan 13 23:39:19.587877 ignition[1043]: disks: disks passed Jan 13 23:39:19.588658 ignition[1043]: Ignition finished successfully Jan 13 23:39:19.593972 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 23:39:19.598640 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 23:39:19.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:19.599030 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 23:39:19.599205 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:39:19.599651 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 23:39:19.600359 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:39:19.602248 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 23:39:19.741792 systemd-fsck[1051]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 13 23:39:19.748439 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 23:39:19.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:19.757838 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 23:39:20.020154 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b1eb7e1a-01a1-41b0-9b3c-5a37b4853d4d r/w with ordered data mode. Quota mode: none. Jan 13 23:39:20.021236 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 23:39:20.022275 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 23:39:20.106791 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 23:39:20.110811 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 23:39:20.117573 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 13 23:39:20.117657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 23:39:20.117715 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:39:20.147343 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 23:39:20.154329 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 23:39:20.164000 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1070) Jan 13 23:39:20.170017 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:39:20.170087 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:39:20.178351 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 23:39:20.178972 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 13 23:39:20.181783 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 23:39:22.585936 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 23:39:22.596569 kernel: kauditd_printk_skb: 22 callbacks suppressed Jan 13 23:39:22.596613 kernel: audit: type=1130 audit(1768347562.586:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:22.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:22.591344 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 23:39:22.602509 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 23:39:22.633808 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 23:39:22.642946 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:39:22.670985 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 23:39:22.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:22.687953 kernel: audit: type=1130 audit(1768347562.678:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:22.693458 ignition[1168]: INFO : Ignition 2.24.0 Jan 13 23:39:22.695598 ignition[1168]: INFO : Stage: mount Jan 13 23:39:22.695598 ignition[1168]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:39:22.695598 ignition[1168]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:39:22.695598 ignition[1168]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:39:22.706067 ignition[1168]: INFO : PUT result: OK Jan 13 23:39:22.711618 ignition[1168]: INFO : mount: mount passed Jan 13 23:39:22.716402 ignition[1168]: INFO : Ignition finished successfully Jan 13 23:39:22.720436 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 23:39:22.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:22.729881 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 23:39:22.734278 kernel: audit: type=1130 audit(1768347562.726:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:22.775036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 23:39:22.810922 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1179) Jan 13 23:39:22.817888 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:39:22.817956 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:39:22.824714 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 23:39:22.824768 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 13 23:39:22.827851 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 23:39:22.874392 ignition[1196]: INFO : Ignition 2.24.0 Jan 13 23:39:22.874392 ignition[1196]: INFO : Stage: files Jan 13 23:39:22.878311 ignition[1196]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:39:22.878311 ignition[1196]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:39:22.878311 ignition[1196]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:39:22.878311 ignition[1196]: INFO : PUT result: OK Jan 13 23:39:22.893429 ignition[1196]: DEBUG : files: compiled without relabeling support, skipping Jan 13 23:39:22.901750 ignition[1196]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 23:39:22.901750 ignition[1196]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 23:39:22.984777 ignition[1196]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 23:39:22.988090 ignition[1196]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 23:39:22.991602 unknown[1196]: wrote ssh authorized keys file for user: core Jan 13 23:39:22.994172 ignition[1196]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 23:39:23.001882 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 13 23:39:23.001882 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 13 23:39:23.097433 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 23:39:23.220814 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 13 23:39:23.226823 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 23:39:23.226823 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 23:39:23.226823 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:39:23.226823 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:39:23.226823 ignition[1196]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:39:23.226823 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:39:23.226823 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 23:39:23.226823 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 23:39:23.258940 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:39:23.258940 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:39:23.258940 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 13 23:39:23.258940 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 13 23:39:23.258940 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 13 23:39:23.258940 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 13 23:39:23.716139 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 23:39:24.077787 ignition[1196]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 13 23:39:24.083094 ignition[1196]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 23:39:24.174797 ignition[1196]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:39:24.184646 ignition[1196]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:39:24.184646 ignition[1196]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 23:39:24.191965 ignition[1196]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 23:39:24.195205 ignition[1196]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 23:39:24.198650 ignition[1196]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:39:24.203046 ignition[1196]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:39:24.207131 ignition[1196]: INFO : files: files passed Jan 13 23:39:24.208933 ignition[1196]: INFO : Ignition finished successfully Jan 13 23:39:24.213879 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 23:39:24.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:24.225063 kernel: audit: type=1130 audit(1768347564.217:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.224886 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 23:39:24.230030 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 23:39:24.260146 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 23:39:24.263100 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 23:39:24.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.279027 kernel: audit: type=1130 audit(1768347564.268:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.279100 kernel: audit: type=1131 audit(1768347564.268:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.285800 initrd-setup-root-after-ignition[1227]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:39:24.289788 initrd-setup-root-after-ignition[1227]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:39:24.293945 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:39:24.304022 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 23:39:24.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.307429 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 23:39:24.320473 kernel: audit: type=1130 audit(1768347564.302:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.321154 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 23:39:24.418061 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 23:39:24.437827 kernel: audit: type=1130 audit(1768347564.420:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.437867 kernel: audit: type=1131 audit(1768347564.420:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:24.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.418252 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 23:39:24.422659 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 23:39:24.429307 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 23:39:24.436344 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 23:39:24.442108 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 23:39:24.500509 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:39:24.516180 kernel: audit: type=1130 audit(1768347564.502:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.512142 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 23:39:24.553409 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:39:24.556008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:39:24.559709 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:39:24.562031 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 23:39:24.567069 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 23:39:24.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.567301 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:39:24.576254 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 23:39:24.581321 systemd[1]: Stopped target basic.target - Basic System. Jan 13 23:39:24.583779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 23:39:24.587777 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:39:24.592830 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 23:39:24.597870 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:39:24.602818 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 23:39:24.607551 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:39:24.612130 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 23:39:24.617493 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 23:39:24.622508 systemd[1]: Stopped target swap.target - Swaps. Jan 13 23:39:24.626798 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
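The files stage logged above added SSH keys for the core user, wrote /home/core/install.sh and several manifests, linked /etc/extensions/kubernetes.raw to the downloaded sysext image, and enabled prepare-helm.service. The real user-data is not reproduced in the log, so the fragment below is only a hypothetical Ignition config (spec 3.x) of the same shape, with placeholder contents, built in Python so it serializes to valid JSON.

```python
# Hypothetical Ignition config fragment (spec 3.x); every value is a
# placeholder chosen to mirror the operations the files stage logged above.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/home/core/install.sh",
                "mode": 0o755,
                # inline data: URL decoding to "#!/bin/bash\n" (placeholder body)
                "contents": {"source": "data:,%23%21/bin/bash%0A"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```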
Jan 13 23:39:24.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.627067 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:39:24.635266 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:39:24.637813 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:39:24.645080 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 23:39:24.645344 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:39:24.651485 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 23:39:24.652064 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 23:39:24.662634 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 23:39:24.662966 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 23:39:24.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.670944 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 23:39:24.671263 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 23:39:24.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.681153 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 23:39:24.690104 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 23:39:24.696516 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 23:39:24.697815 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:39:24.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.703265 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 23:39:24.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.703489 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:39:24.709188 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 23:39:24.709422 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:39:24.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 13 23:39:24.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.740432 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 23:39:24.740686 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 23:39:24.759583 ignition[1251]: INFO : Ignition 2.24.0 Jan 13 23:39:24.759583 ignition[1251]: INFO : Stage: umount Jan 13 23:39:24.766338 ignition[1251]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:39:24.766338 ignition[1251]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 23:39:24.766338 ignition[1251]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 23:39:24.774338 ignition[1251]: INFO : PUT result: OK Jan 13 23:39:24.783643 ignition[1251]: INFO : umount: umount passed Jan 13 23:39:24.789069 ignition[1251]: INFO : Ignition finished successfully Jan 13 23:39:24.792438 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 23:39:24.792862 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 23:39:24.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.800564 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 23:39:24.800830 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 23:39:24.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.808042 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 23:39:24.808253 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 23:39:24.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.815208 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 23:39:24.815868 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 23:39:24.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.822313 systemd[1]: Stopped target network.target - Network. Jan 13 23:39:24.826411 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 23:39:24.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.826530 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:39:24.829725 systemd[1]: Stopped target paths.target - Path Units. Jan 13 23:39:24.837244 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 23:39:24.842953 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:39:24.847834 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 13 23:39:24.855466 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 23:39:24.858796 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 23:39:24.858879 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 23:39:24.862338 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 23:39:24.862412 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:39:24.865567 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 13 23:39:24.865622 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:39:24.869836 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 23:39:24.870676 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 23:39:24.874682 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 23:39:24.874801 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 23:39:24.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.896612 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 23:39:24.902637 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 23:39:24.909388 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 23:39:24.924254 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 23:39:24.924669 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 23:39:24.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.939994 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 23:39:24.940385 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 23:39:24.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.952000 audit: BPF prog-id=9 op=UNLOAD Jan 13 23:39:24.953000 audit: BPF prog-id=6 op=UNLOAD Jan 13 23:39:24.955397 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 13 23:39:24.961109 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 23:39:24.961195 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:39:24.972377 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 23:39:24.978589 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 23:39:24.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.978727 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:39:24.980960 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 23:39:24.981049 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 23:39:24.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:24.993266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 23:39:24.993934 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 23:39:25.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.003424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:39:25.036019 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 23:39:25.036969 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:39:25.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.046747 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 23:39:25.047260 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 23:39:25.052219 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 23:39:25.052297 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:39:25.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.054934 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 23:39:25.060345 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:39:25.068339 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 23:39:25.068452 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 23:39:25.071154 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 23:39:25.071257 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:39:25.076447 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 23:39:25.079184 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 13 23:39:25.085130 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:39:25.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.110921 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 13 23:39:25.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.111044 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:39:25.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.116299 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 23:39:25.116404 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:39:25.119562 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 23:39:25.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.119661 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:39:25.119819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:39:25.119938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:39:25.138037 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 23:39:25.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.138959 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 23:39:25.155611 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 23:39:25.160666 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 23:39:25.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.177057 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 23:39:25.178531 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 23:39:25.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.188521 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 23:39:25.190317 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 23:39:25.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:25.196030 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
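Most of the teardown above appears twice: once as systemd's own "Stopped …" messages and once as kernel audit records of type SERVICE_START/SERVICE_STOP. A short, self-contained Python helper like the following (not part of the boot tooling; the field layout is taken from the records above) can pull the unit name and result out of each audit record when a saved copy of this console log is piped into it.

```python
# Extract unit= and res= from SERVICE_START / SERVICE_STOP audit records,
# so the start/stop order of the initrd units can be read off a saved log.
import re
import sys

AUDIT = re.compile(
    r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?msg='unit=([\w@\\.-]+) .*?res=(\w+)'"
)

def service_events(text: str):
    # Yields (event_type, unit_name, result) for every matching audit record.
    for kind, unit, result in AUDIT.findall(text):
        yield kind, unit, result

if __name__ == "__main__":
    log = sys.stdin.read()
    for kind, unit, result in service_events(log):
        print(f"{kind:13s} {unit:40s} {result}")
```

Saved as, say, audit_units.py, it can be run as `python3 audit_units.py < console.log` to list each unit start/stop in order.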
Jan 13 23:39:25.203119 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 23:39:25.242251 systemd[1]: Switching root. Jan 13 23:39:25.278445 systemd-journald[360]: Journal stopped Jan 13 23:39:29.159181 systemd-journald[360]: Received SIGTERM from PID 1 (systemd). Jan 13 23:39:29.159324 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 23:39:29.159367 kernel: SELinux: policy capability open_perms=1 Jan 13 23:39:29.159401 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 23:39:29.159434 kernel: SELinux: policy capability always_check_network=0 Jan 13 23:39:29.159467 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 23:39:29.159509 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 23:39:29.159544 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 23:39:29.167974 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 23:39:29.168025 kernel: SELinux: policy capability userspace_initial_context=0 Jan 13 23:39:29.168061 systemd[1]: Successfully loaded SELinux policy in 149.879ms. Jan 13 23:39:29.168117 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.763ms. Jan 13 23:39:29.168162 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:39:29.168196 systemd[1]: Detected virtualization amazon. Jan 13 23:39:29.168227 systemd[1]: Detected architecture arm64. Jan 13 23:39:29.168267 systemd[1]: Detected first boot. Jan 13 23:39:29.168300 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 13 23:39:29.168330 zram_generator::config[1297]: No configuration found. Jan 13 23:39:29.168368 kernel: NET: Registered PF_VSOCK protocol family Jan 13 23:39:29.168402 systemd[1]: Populated /etc with preset unit settings. Jan 13 23:39:29.168441 kernel: kauditd_printk_skb: 44 callbacks suppressed Jan 13 23:39:29.168476 kernel: audit: type=1334 audit(1768347568.422:88): prog-id=12 op=LOAD Jan 13 23:39:29.168510 kernel: audit: type=1334 audit(1768347568.424:89): prog-id=3 op=UNLOAD Jan 13 23:39:29.168540 kernel: audit: type=1334 audit(1768347568.425:90): prog-id=13 op=LOAD Jan 13 23:39:29.168571 kernel: audit: type=1334 audit(1768347568.428:91): prog-id=14 op=LOAD Jan 13 23:39:29.168602 kernel: audit: type=1334 audit(1768347568.428:92): prog-id=4 op=UNLOAD Jan 13 23:39:29.168632 kernel: audit: type=1334 audit(1768347568.428:93): prog-id=5 op=UNLOAD Jan 13 23:39:29.168664 kernel: audit: type=1131 audit(1768347568.430:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.168695 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 23:39:29.168727 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 23:39:29.168761 kernel: audit: type=1130 audit(1768347568.443:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:29.168795 kernel: audit: type=1131 audit(1768347568.443:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.168828 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 23:39:29.168861 kernel: audit: type=1334 audit(1768347568.454:97): prog-id=12 op=UNLOAD Jan 13 23:39:29.168892 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 23:39:29.168950 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 23:39:29.168981 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 23:39:29.169011 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 23:39:29.169042 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 23:39:29.169074 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 23:39:29.169111 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 23:39:29.169142 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 23:39:29.169175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:39:29.169213 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:39:29.169246 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 23:39:29.169277 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 23:39:29.169399 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 23:39:29.181537 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:39:29.181606 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 23:39:29.181648 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:39:29.181682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:39:29.181716 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 23:39:29.181749 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 23:39:29.181795 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 23:39:29.181826 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 23:39:29.181856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:39:29.181888 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 23:39:29.181954 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 13 23:39:29.181989 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:39:29.182022 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:39:29.182059 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 23:39:29.182092 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 23:39:29.182122 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Jan 13 23:39:29.182153 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:39:29.182191 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 13 23:39:29.182221 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:39:29.182254 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 13 23:39:29.182287 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 13 23:39:29.182318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:39:29.182348 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:39:29.182380 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 23:39:29.182410 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 23:39:29.182443 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 23:39:29.182474 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 23:39:29.182504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 23:39:29.182538 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 23:39:29.182568 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 23:39:29.182602 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 23:39:29.182634 systemd[1]: Reached target machines.target - Containers. Jan 13 23:39:29.182664 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 23:39:29.182694 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:39:29.182729 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:39:29.182762 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 23:39:29.182792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:39:29.182821 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:39:29.182855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:39:29.182884 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 23:39:29.182942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:39:29.182983 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 23:39:29.183014 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 23:39:29.183048 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 23:39:29.183078 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 23:39:29.183111 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 23:39:29.183142 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:39:29.183176 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 13 23:39:29.183205 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:39:29.183235 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 23:39:29.183267 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 23:39:29.183298 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 13 23:39:29.183334 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:39:29.183365 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 23:39:29.183395 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 23:39:29.183428 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 23:39:29.183457 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 23:39:29.183488 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 23:39:29.183518 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 23:39:29.183555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:39:29.183586 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 23:39:29.183619 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 23:39:29.183657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:39:29.183702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:39:29.183735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:39:29.183765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:39:29.183796 kernel: fuse: init (API version 7.41) Jan 13 23:39:29.183828 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:39:29.183858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:39:29.183890 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 23:39:29.197282 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 23:39:29.197329 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 23:39:29.197362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:39:29.197395 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 23:39:29.197431 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:39:29.197461 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 13 23:39:29.197491 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 23:39:29.197522 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 23:39:29.197559 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 23:39:29.197590 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:39:29.197620 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 13 23:39:29.197656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 23:39:29.197686 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:39:29.197716 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 23:39:29.197749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:39:29.197782 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 23:39:29.197816 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:39:29.197850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:39:29.200058 systemd-journald[1375]: Collecting audit messages is enabled. Jan 13 23:39:29.200165 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 23:39:29.200200 kernel: ACPI: bus type drm_connector registered Jan 13 23:39:29.200232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 23:39:29.200263 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:39:29.200301 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 23:39:29.200333 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 13 23:39:29.200363 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 23:39:29.200393 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 23:39:29.200423 systemd-journald[1375]: Journal started Jan 13 23:39:29.200471 systemd-journald[1375]: Runtime Journal (/run/log/journal/ec2e741dd8cd12b757bcf4d3d0c95487) is 8M, max 75.3M, 67.3M free. Jan 13 23:39:28.595000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 13 23:39:28.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.831000 audit: BPF prog-id=14 op=UNLOAD Jan 13 23:39:28.831000 audit: BPF prog-id=13 op=UNLOAD Jan 13 23:39:28.833000 audit: BPF prog-id=15 op=LOAD Jan 13 23:39:28.833000 audit: BPF prog-id=16 op=LOAD Jan 13 23:39:28.833000 audit: BPF prog-id=17 op=LOAD Jan 13 23:39:28.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:28.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:29.145000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 13 23:39:29.145000 audit[1375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc11a6310 a2=4000 a3=0 items=0 ppid=1 pid=1375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:29.145000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 13 23:39:29.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.400867 systemd[1]: Queued start job for default target multi-user.target. Jan 13 23:39:29.227056 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:39:29.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:28.431045 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 23:39:28.431932 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 23:39:29.223829 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 23:39:29.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.233072 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 23:39:29.237418 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 23:39:29.248363 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 13 23:39:29.351612 systemd-journald[1375]: Time spent on flushing to /var/log/journal/ec2e741dd8cd12b757bcf4d3d0c95487 is 79.108ms for 1057 entries. Jan 13 23:39:29.351612 systemd-journald[1375]: System Journal (/var/log/journal/ec2e741dd8cd12b757bcf4d3d0c95487) is 8M, max 588.1M, 580.1M free. Jan 13 23:39:29.462407 systemd-journald[1375]: Received client request to flush runtime journal. Jan 13 23:39:29.462475 kernel: loop1: detected capacity change from 0 to 45344 Jan 13 23:39:29.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 13 23:39:29.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.374103 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:39:29.386544 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Jan 13 23:39:29.386570 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Jan 13 23:39:29.392658 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 13 23:39:29.407879 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:39:29.437919 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 23:39:29.447834 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:39:29.468083 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 23:39:29.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.568095 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 23:39:29.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.575347 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 23:39:29.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.647141 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 23:39:29.651000 audit: BPF prog-id=18 op=LOAD Jan 13 23:39:29.651000 audit: BPF prog-id=19 op=LOAD Jan 13 23:39:29.652000 audit: BPF prog-id=20 op=LOAD Jan 13 23:39:29.655233 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 13 23:39:29.662000 audit: BPF prog-id=21 op=LOAD Jan 13 23:39:29.667209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:39:29.672464 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:39:29.690000 audit: BPF prog-id=22 op=LOAD Jan 13 23:39:29.691000 audit: BPF prog-id=23 op=LOAD Jan 13 23:39:29.691000 audit: BPF prog-id=24 op=LOAD Jan 13 23:39:29.700000 audit: BPF prog-id=25 op=LOAD Jan 13 23:39:29.700000 audit: BPF prog-id=26 op=LOAD Jan 13 23:39:29.700000 audit: BPF prog-id=27 op=LOAD Jan 13 23:39:29.698350 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 13 23:39:29.708894 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 23:39:29.718013 kernel: loop2: detected capacity change from 0 to 200800 Jan 13 23:39:29.744083 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. 
Jan 13 23:39:29.744114 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Jan 13 23:39:29.752704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:39:29.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.816146 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 23:39:29.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.863395 systemd-nsresourced[1456]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 13 23:39:29.866380 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 13 23:39:29.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:29.986938 kernel: loop3: detected capacity change from 0 to 100192 Jan 13 23:39:30.055377 systemd-oomd[1452]: No swap; memory pressure usage will be degraded Jan 13 23:39:30.056341 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 13 23:39:30.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:30.148182 systemd-resolved[1453]: Positive Trust Anchors: Jan 13 23:39:30.148674 systemd-resolved[1453]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:39:30.148768 systemd-resolved[1453]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:39:30.148944 systemd-resolved[1453]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:39:30.164430 systemd-resolved[1453]: Defaulting to hostname 'linux'. Jan 13 23:39:30.167028 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 23:39:30.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:30.172307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:39:30.286970 kernel: loop4: detected capacity change from 0 to 61504 Jan 13 23:39:30.441978 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 13 23:39:30.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:30.444000 audit: BPF prog-id=8 op=UNLOAD Jan 13 23:39:30.445000 audit: BPF prog-id=7 op=UNLOAD Jan 13 23:39:30.446000 audit: BPF prog-id=28 op=LOAD Jan 13 23:39:30.446000 audit: BPF prog-id=29 op=LOAD Jan 13 23:39:30.449628 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:39:30.525402 systemd-udevd[1479]: Using default interface naming scheme 'v257'. Jan 13 23:39:30.624951 kernel: loop5: detected capacity change from 0 to 45344 Jan 13 23:39:30.643958 kernel: loop6: detected capacity change from 0 to 200800 Jan 13 23:39:30.669980 kernel: loop7: detected capacity change from 0 to 100192 Jan 13 23:39:30.687956 kernel: loop1: detected capacity change from 0 to 61504 Jan 13 23:39:30.701312 (sd-merge)[1481]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 13 23:39:30.710396 (sd-merge)[1481]: Merged extensions into '/usr'. Jan 13 23:39:30.718945 systemd[1]: Reload requested from client PID 1401 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 23:39:30.719120 systemd[1]: Reloading... Jan 13 23:39:30.888511 (udev-worker)[1490]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:39:30.960946 zram_generator::config[1540]: No configuration found. Jan 13 23:39:31.533378 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 23:39:31.533553 systemd[1]: Reloading finished in 813 ms. Jan 13 23:39:31.561614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:39:31.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:31.567990 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 23:39:31.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:31.620361 systemd[1]: Starting ensure-sysext.service... Jan 13 23:39:31.624000 audit: BPF prog-id=30 op=LOAD Jan 13 23:39:31.628279 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 23:39:31.639294 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:39:31.650196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 23:39:31.652000 audit: BPF prog-id=31 op=LOAD Jan 13 23:39:31.652000 audit: BPF prog-id=21 op=UNLOAD Jan 13 23:39:31.655000 audit: BPF prog-id=32 op=LOAD Jan 13 23:39:31.655000 audit: BPF prog-id=25 op=UNLOAD Jan 13 23:39:31.655000 audit: BPF prog-id=33 op=LOAD Jan 13 23:39:31.655000 audit: BPF prog-id=34 op=LOAD Jan 13 23:39:31.655000 audit: BPF prog-id=26 op=UNLOAD Jan 13 23:39:31.655000 audit: BPF prog-id=27 op=UNLOAD Jan 13 23:39:31.659000 audit: BPF prog-id=35 op=LOAD Jan 13 23:39:31.660000 audit: BPF prog-id=22 op=UNLOAD Jan 13 23:39:31.660000 audit: BPF prog-id=36 op=LOAD Jan 13 23:39:31.660000 audit: BPF prog-id=37 op=LOAD Jan 13 23:39:31.660000 audit: BPF prog-id=23 op=UNLOAD Jan 13 23:39:31.660000 audit: BPF prog-id=24 op=UNLOAD Jan 13 23:39:31.661000 audit: BPF prog-id=38 op=LOAD Jan 13 23:39:31.661000 audit: BPF prog-id=39 op=LOAD Jan 13 23:39:31.662000 audit: BPF prog-id=28 op=UNLOAD Jan 13 23:39:31.662000 audit: BPF prog-id=29 op=UNLOAD Jan 13 23:39:31.664000 audit: BPF prog-id=40 op=LOAD Jan 13 23:39:31.665000 audit: BPF prog-id=18 op=UNLOAD Jan 13 23:39:31.665000 audit: BPF prog-id=41 op=LOAD Jan 13 23:39:31.667000 audit: BPF prog-id=42 op=LOAD Jan 13 23:39:31.667000 audit: BPF prog-id=19 op=UNLOAD Jan 13 23:39:31.667000 audit: BPF prog-id=20 op=UNLOAD Jan 13 23:39:31.671000 audit: BPF prog-id=43 op=LOAD Jan 13 23:39:31.673000 audit: BPF prog-id=15 op=UNLOAD Jan 13 23:39:31.673000 audit: BPF prog-id=44 op=LOAD Jan 13 23:39:31.673000 audit: BPF prog-id=45 op=LOAD Jan 13 23:39:31.673000 audit: BPF prog-id=16 op=UNLOAD Jan 13 23:39:31.673000 audit: BPF prog-id=17 op=UNLOAD Jan 13 23:39:31.714589 systemd[1]: Reload requested from client PID 1615 ('systemctl') (unit ensure-sysext.service)... Jan 13 23:39:31.714607 systemd[1]: Reloading... Jan 13 23:39:31.809831 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 13 23:39:31.811334 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 13 23:39:31.812201 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 23:39:31.816406 systemd-tmpfiles[1619]: ACLs are not supported, ignoring. Jan 13 23:39:31.818156 systemd-tmpfiles[1619]: ACLs are not supported, ignoring. Jan 13 23:39:31.835934 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:39:31.836111 systemd-tmpfiles[1619]: Skipping /boot Jan 13 23:39:31.866509 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:39:31.866678 systemd-tmpfiles[1619]: Skipping /boot Jan 13 23:39:32.004985 zram_generator::config[1731]: No configuration found. Jan 13 23:39:32.093470 systemd-networkd[1617]: lo: Link UP Jan 13 23:39:32.094037 systemd-networkd[1617]: lo: Gained carrier Jan 13 23:39:32.099862 systemd-networkd[1617]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:39:32.100071 systemd-networkd[1617]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 13 23:39:32.102858 systemd-networkd[1617]: eth0: Link UP Jan 13 23:39:32.103525 systemd-networkd[1617]: eth0: Gained carrier Jan 13 23:39:32.104992 systemd-networkd[1617]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:39:32.119048 systemd-networkd[1617]: eth0: DHCPv4 address 172.31.24.127/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 23:39:32.504621 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 23:39:32.508412 systemd[1]: Reloading finished in 792 ms. Jan 13 23:39:32.534038 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:39:32.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.539000 audit: BPF prog-id=46 op=LOAD Jan 13 23:39:32.539000 audit: BPF prog-id=31 op=UNLOAD Jan 13 23:39:32.541000 audit: BPF prog-id=47 op=LOAD Jan 13 23:39:32.541000 audit: BPF prog-id=40 op=UNLOAD Jan 13 23:39:32.541000 audit: BPF prog-id=48 op=LOAD Jan 13 23:39:32.541000 audit: BPF prog-id=49 op=LOAD Jan 13 23:39:32.542000 audit: BPF prog-id=41 op=UNLOAD Jan 13 23:39:32.542000 audit: BPF prog-id=42 op=UNLOAD Jan 13 23:39:32.543000 audit: BPF prog-id=50 op=LOAD Jan 13 23:39:32.543000 audit: BPF prog-id=30 op=UNLOAD Jan 13 23:39:32.544000 audit: BPF prog-id=51 op=LOAD Jan 13 23:39:32.545000 audit: BPF prog-id=35 op=UNLOAD Jan 13 23:39:32.545000 audit: BPF prog-id=52 op=LOAD Jan 13 23:39:32.551000 audit: BPF prog-id=53 op=LOAD Jan 13 23:39:32.551000 audit: BPF prog-id=36 op=UNLOAD Jan 13 23:39:32.551000 audit: BPF prog-id=37 op=UNLOAD Jan 13 23:39:32.554000 audit: BPF prog-id=54 op=LOAD Jan 13 23:39:32.554000 audit: BPF prog-id=32 op=UNLOAD Jan 13 23:39:32.554000 audit: BPF prog-id=55 op=LOAD Jan 13 23:39:32.555000 audit: BPF prog-id=56 op=LOAD Jan 13 23:39:32.555000 audit: BPF prog-id=33 op=UNLOAD Jan 13 23:39:32.555000 audit: BPF prog-id=34 op=UNLOAD Jan 13 23:39:32.556000 audit: BPF prog-id=57 op=LOAD Jan 13 23:39:32.556000 audit: BPF prog-id=58 op=LOAD Jan 13 23:39:32.556000 audit: BPF prog-id=38 op=UNLOAD Jan 13 23:39:32.556000 audit: BPF prog-id=39 op=UNLOAD Jan 13 23:39:32.559000 audit: BPF prog-id=59 op=LOAD Jan 13 23:39:32.559000 audit: BPF prog-id=43 op=UNLOAD Jan 13 23:39:32.560000 audit: BPF prog-id=60 op=LOAD Jan 13 23:39:32.560000 audit: BPF prog-id=61 op=LOAD Jan 13 23:39:32.560000 audit: BPF prog-id=44 op=UNLOAD Jan 13 23:39:32.560000 audit: BPF prog-id=45 op=UNLOAD Jan 13 23:39:32.569054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:39:32.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.576129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:39:32.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.636070 systemd[1]: Reached target network.target - Network. Jan 13 23:39:32.640687 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
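[Editorial aside, not part of the boot log] A minimal sketch of the subnet arithmetic behind the DHCPv4 lease recorded above: the address 172.31.24.127/20 and the gateway 172.31.16.1 sit in the same 172.31.16.0/20 network, which is checked here with Python's standard ipaddress module (values copied from the systemd-networkd lines above; everything else is illustrative).

    # Illustrative only; address, prefix and gateway come from the log above.
    import ipaddress

    lease = ipaddress.ip_interface("172.31.24.127/20")   # DHCPv4 address + prefix from eth0
    gateway = ipaddress.ip_address("172.31.16.1")         # gateway / DHCP server from the log

    print(lease.network)             # 172.31.16.0/20
    print(gateway in lease.network)  # True -> the gateway is on-link for eth0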
Jan 13 23:39:32.647374 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 23:39:32.654883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:39:32.662014 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:39:32.669880 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:39:32.678555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:39:32.682488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:39:32.682960 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:39:32.687336 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 23:39:32.693577 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 23:39:32.696372 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:39:32.701052 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 23:39:32.709573 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 13 23:39:32.716590 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 23:39:32.730562 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 23:39:32.742236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:39:32.742640 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:39:32.742991 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:39:32.743207 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:39:32.754418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:39:32.782313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:39:32.785297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:39:32.785717 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:39:32.785998 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:39:32.786376 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 13 23:39:32.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.796339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:39:32.798488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:39:32.812016 systemd[1]: Finished ensure-sysext.service. Jan 13 23:39:32.815064 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:39:32.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.815520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:39:32.822237 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:39:32.830519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:39:32.831116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:39:32.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.834780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:39:32.837000 audit[1794]: SYSTEM_BOOT pid=1794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.852294 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 23:39:32.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:32.868408 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 13 23:39:32.881642 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:39:32.883655 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 23:39:32.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.894189 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 23:39:32.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.905488 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 23:39:32.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:32.991000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 13 23:39:32.991000 audit[1825]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff9e04f70 a2=420 a3=0 items=0 ppid=1783 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:32.991000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:39:32.993502 augenrules[1825]: No rules Jan 13 23:39:32.996197 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 23:39:32.998043 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 23:39:33.040620 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 23:39:33.046296 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 23:39:33.860140 systemd-networkd[1617]: eth0: Gained IPv6LL Jan 13 23:39:33.864067 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 23:39:33.868025 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 23:39:36.068857 ldconfig[1788]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 23:39:36.083029 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 23:39:36.089199 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 23:39:36.120599 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 23:39:36.124000 systemd[1]: Reached target sysinit.target - System Initialization. 
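[Editorial aside, not part of the boot log] A hypothetical helper for reading the SYSCALL audit records above, such as the auditctl event with arch=c00000b7 and syscall=206: the arch value is ELF machine EM_AARCH64 (0xb7) with the 64-bit and little-endian flags set, i.e. AUDIT_ARCH_AARCH64, and on the arm64 generic syscall table 206 is sendto and 211 is sendmsg, consistent with auditctl pushing the new rules over its netlink socket.

    # Illustrative decode of the arch= field; the masks are the standard audit arch flags.
    ARCH = 0xC00000B7                 # arch= value printed in the SYSCALL records above
    print(hex(ARCH & 0xFFFF))         # 0xb7  -> ELF machine EM_AARCH64
    print(bool(ARCH & 0x80000000))    # True  -> 64-bit ABI flag
    print(bool(ARCH & 0x40000000))    # True  -> little-endian flag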
Jan 13 23:39:36.126836 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 23:39:36.129934 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 23:39:36.133412 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 23:39:36.136299 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 23:39:36.139652 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 13 23:39:36.142974 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 13 23:39:36.146035 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 23:39:36.149100 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 23:39:36.149158 systemd[1]: Reached target paths.target - Path Units. Jan 13 23:39:36.151398 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:39:36.156534 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 23:39:36.166355 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 23:39:36.173758 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 13 23:39:36.177427 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 13 23:39:36.180753 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 13 23:39:36.196399 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 23:39:36.199645 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 13 23:39:36.203735 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 23:39:36.206644 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:39:36.209140 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:39:36.211719 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:39:36.212081 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:39:36.216176 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 23:39:36.225233 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 23:39:36.229721 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 23:39:36.236502 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 23:39:36.248778 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 23:39:36.257131 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 23:39:36.261511 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 23:39:36.269106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:39:36.278168 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 23:39:36.295462 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 23:39:36.303620 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 13 23:39:36.310442 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 23:39:36.322054 jq[1842]: false Jan 13 23:39:36.332203 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 23:39:36.338386 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 23:39:36.356212 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 23:39:36.378344 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 23:39:36.381131 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 23:39:36.382208 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 23:39:36.386281 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 23:39:36.393776 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 23:39:36.414002 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 23:39:36.417891 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 23:39:36.418453 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 23:39:36.433818 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 23:39:36.441825 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 23:39:36.464501 extend-filesystems[1843]: Found /dev/nvme0n1p6 Jan 13 23:39:36.470138 jq[1859]: true Jan 13 23:39:36.510937 extend-filesystems[1843]: Found /dev/nvme0n1p9 Jan 13 23:39:36.517491 extend-filesystems[1843]: Checking size of /dev/nvme0n1p9 Jan 13 23:39:36.570802 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:06:29 UTC 2026 (1): Starting Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: ---------------------------------------------------- Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: ntp-4 is maintained by Network Time Foundation, Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: corporation. 
Support and training for ntp-4 are Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: available at https://www.nwtime.org/support Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: ---------------------------------------------------- Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: proto: precision = 0.096 usec (-23) Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: basedate set to 2026-01-01 Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: gps base set to 2026-01-04 (week 2400) Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: Listen normally on 3 eth0 172.31.24.127:123 Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: Listen normally on 4 lo [::1]:123 Jan 13 23:39:36.581702 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: Listen normally on 5 eth0 [fe80::432:dfff:fee7:5cf7%2]:123 Jan 13 23:39:36.570302 dbus-daemon[1840]: [system] SELinux support is enabled Jan 13 23:39:36.627685 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: Listening on routing socket on fd #22 for interface updates Jan 13 23:39:36.627685 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:39:36.627685 ntpd[1846]: 13 Jan 23:39:36 ntpd[1846]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:39:36.573653 ntpd[1846]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:06:29 UTC 2026 (1): Starting Jan 13 23:39:36.602737 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 23:39:36.638445 tar[1862]: linux-arm64/LICENSE Jan 13 23:39:36.638445 tar[1862]: linux-arm64/helm Jan 13 23:39:36.573769 ntpd[1846]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 23:39:36.605070 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 23:39:36.573791 ntpd[1846]: ---------------------------------------------------- Jan 13 23:39:36.613561 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 23:39:36.573810 ntpd[1846]: ntp-4 is maintained by Network Time Foundation, Jan 13 23:39:36.613635 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 23:39:36.573827 ntpd[1846]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 23:39:36.620122 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 23:39:36.573845 ntpd[1846]: corporation. Support and training for ntp-4 are Jan 13 23:39:36.620165 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 23:39:36.573862 ntpd[1846]: available at https://www.nwtime.org/support Jan 13 23:39:36.648447 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
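[Editorial aside, not part of the boot log] A worked reading of the ntpd banner above, offered as an assumption about the output rather than anything stated in the log: "proto: precision = 0.096 usec (-23)" pairs the measured clock-reading precision with its base-2 exponent, since log2(0.096e-6 s) is roughly -23.3 and 2^-23 s is roughly 0.119 us, which matches the -23 shown in parentheses.

    # Illustrative check of the precision exponent shown in the ntpd banner above.
    import math
    measured = 0.096e-6                      # seconds, from "precision = 0.096 usec"
    print(round(math.log2(measured), 1))     # -23.3, shown by ntpd as the exponent -23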
Jan 13 23:39:36.573879 ntpd[1846]: ---------------------------------------------------- Jan 13 23:39:36.579162 dbus-daemon[1840]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1617 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 23:39:36.579614 ntpd[1846]: proto: precision = 0.096 usec (-23) Jan 13 23:39:36.580677 ntpd[1846]: basedate set to 2026-01-01 Jan 13 23:39:36.580710 ntpd[1846]: gps base set to 2026-01-04 (week 2400) Jan 13 23:39:36.580966 ntpd[1846]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 23:39:36.581024 ntpd[1846]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 23:39:36.581521 ntpd[1846]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 23:39:36.581578 ntpd[1846]: Listen normally on 3 eth0 172.31.24.127:123 Jan 13 23:39:36.581629 ntpd[1846]: Listen normally on 4 lo [::1]:123 Jan 13 23:39:36.581682 ntpd[1846]: Listen normally on 5 eth0 [fe80::432:dfff:fee7:5cf7%2]:123 Jan 13 23:39:36.581728 ntpd[1846]: Listening on routing socket on fd #22 for interface updates Jan 13 23:39:36.594224 ntpd[1846]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:39:36.594291 ntpd[1846]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 23:39:36.617805 dbus-daemon[1840]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 23:39:36.681390 jq[1876]: true Jan 13 23:39:36.707640 extend-filesystems[1843]: Resized partition /dev/nvme0n1p9 Jan 13 23:39:36.719705 extend-filesystems[1908]: resize2fs 1.47.3 (8-Jul-2025) Jan 13 23:39:36.778939 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 13 23:39:36.807042 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 23:39:36.827943 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 13 23:39:36.848320 update_engine[1858]: I20260113 23:39:36.832583 1858 main.cc:92] Flatcar Update Engine starting Jan 13 23:39:36.852282 systemd-logind[1857]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 23:39:36.903288 extend-filesystems[1908]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 23:39:36.903288 extend-filesystems[1908]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 13 23:39:36.903288 extend-filesystems[1908]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 13 23:39:36.852361 systemd-logind[1857]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 23:39:36.943623 update_engine[1858]: I20260113 23:39:36.866233 1858 update_check_scheduler.cc:74] Next update check in 6m45s Jan 13 23:39:36.943683 extend-filesystems[1843]: Resized filesystem in /dev/nvme0n1p9 Jan 13 23:39:36.853436 systemd-logind[1857]: New seat seat0. Jan 13 23:39:36.856012 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 23:39:36.860445 systemd[1]: Started update-engine.service - Update Engine. Jan 13 23:39:36.864431 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 23:39:36.871312 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 23:39:36.885756 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 23:39:36.893131 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 23:39:36.906090 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
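[Editorial aside, not part of the boot log] A small sketch of the sizes implied by the on-line ext4 resize reported above, assuming the 4 KiB block size that resize2fs prints: /dev/nvme0n1p9 grows from 1617920 to 2604027 blocks while mounted on /.

    # Illustrative only; block counts copied from the EXT4-fs / resize2fs lines above.
    BLOCK = 4096                                   # bytes, the "(4k)" blocks in the log
    old_blocks, new_blocks = 1_617_920, 2_604_027
    print(old_blocks * BLOCK / 2**30)              # ~6.17 GiB before the resize
    print(new_blocks * BLOCK / 2**30)              # ~9.93 GiB after growing the root filesystem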
Jan 13 23:39:37.067067 coreos-metadata[1839]: Jan 13 23:39:37.064 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 23:39:37.070184 coreos-metadata[1839]: Jan 13 23:39:37.070 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 23:39:37.074938 bash[1949]: Updated "/home/core/.ssh/authorized_keys" Jan 13 23:39:37.075472 coreos-metadata[1839]: Jan 13 23:39:37.075 INFO Fetch successful Jan 13 23:39:37.075472 coreos-metadata[1839]: Jan 13 23:39:37.075 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 23:39:37.078073 coreos-metadata[1839]: Jan 13 23:39:37.077 INFO Fetch successful Jan 13 23:39:37.078073 coreos-metadata[1839]: Jan 13 23:39:37.078 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 23:39:37.079024 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 23:39:37.088569 systemd[1]: Starting sshkeys.service... Jan 13 23:39:37.093396 coreos-metadata[1839]: Jan 13 23:39:37.088 INFO Fetch successful Jan 13 23:39:37.093396 coreos-metadata[1839]: Jan 13 23:39:37.089 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 23:39:37.095187 coreos-metadata[1839]: Jan 13 23:39:37.094 INFO Fetch successful Jan 13 23:39:37.095187 coreos-metadata[1839]: Jan 13 23:39:37.094 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 23:39:37.095187 coreos-metadata[1839]: Jan 13 23:39:37.094 INFO Fetch failed with 404: resource not found Jan 13 23:39:37.095691 coreos-metadata[1839]: Jan 13 23:39:37.095 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 23:39:37.099137 coreos-metadata[1839]: Jan 13 23:39:37.098 INFO Fetch successful Jan 13 23:39:37.099137 coreos-metadata[1839]: Jan 13 23:39:37.099 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 23:39:37.111277 coreos-metadata[1839]: Jan 13 23:39:37.111 INFO Fetch successful Jan 13 23:39:37.111277 coreos-metadata[1839]: Jan 13 23:39:37.111 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 23:39:37.115574 coreos-metadata[1839]: Jan 13 23:39:37.115 INFO Fetch successful Jan 13 23:39:37.115574 coreos-metadata[1839]: Jan 13 23:39:37.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 23:39:37.126453 coreos-metadata[1839]: Jan 13 23:39:37.123 INFO Fetch successful Jan 13 23:39:37.126453 coreos-metadata[1839]: Jan 13 23:39:37.126 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 23:39:37.128064 coreos-metadata[1839]: Jan 13 23:39:37.127 INFO Fetch successful Jan 13 23:39:37.142986 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 23:39:37.152795 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 23:39:37.332393 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 23:39:37.337428 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 13 23:39:37.421137 coreos-metadata[1965]: Jan 13 23:39:37.420 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 23:39:37.429002 coreos-metadata[1965]: Jan 13 23:39:37.428 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 23:39:37.434502 coreos-metadata[1965]: Jan 13 23:39:37.433 INFO Fetch successful Jan 13 23:39:37.434502 coreos-metadata[1965]: Jan 13 23:39:37.434 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 23:39:37.435337 coreos-metadata[1965]: Jan 13 23:39:37.435 INFO Fetch successful Jan 13 23:39:37.441393 unknown[1965]: wrote ssh authorized keys file for user: core Jan 13 23:39:37.591261 update-ssh-keys[2003]: Updated "/home/core/.ssh/authorized_keys" Jan 13 23:39:37.601292 amazon-ssm-agent[1927]: Initializing new seelog logger Jan 13 23:39:37.596125 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 23:39:37.612573 amazon-ssm-agent[1927]: New Seelog Logger Creation Complete Jan 13 23:39:37.612573 amazon-ssm-agent[1927]: 2026/01/13 23:39:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:39:37.612573 amazon-ssm-agent[1927]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:39:37.612573 amazon-ssm-agent[1927]: 2026/01/13 23:39:37 processing appconfig overrides Jan 13 23:39:37.614109 systemd[1]: Finished sshkeys.service. Jan 13 23:39:37.632288 containerd[1894]: time="2026-01-13T23:39:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 13 23:39:37.632288 containerd[1894]: time="2026-01-13T23:39:37.623743761Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 13 23:39:37.640132 amazon-ssm-agent[1927]: 2026/01/13 23:39:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:39:37.640132 amazon-ssm-agent[1927]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:39:37.640132 amazon-ssm-agent[1927]: 2026/01/13 23:39:37 processing appconfig overrides Jan 13 23:39:37.640132 amazon-ssm-agent[1927]: 2026/01/13 23:39:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:39:37.640132 amazon-ssm-agent[1927]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:39:37.640132 amazon-ssm-agent[1927]: 2026/01/13 23:39:37 processing appconfig overrides Jan 13 23:39:37.644295 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 23:39:37.649716 dbus-daemon[1840]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 23:39:37.652703 amazon-ssm-agent[1927]: 2026-01-13 23:39:37.6347 INFO Proxy environment variables: Jan 13 23:39:37.653716 dbus-daemon[1840]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1901 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 23:39:37.665134 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 23:39:37.673570 amazon-ssm-agent[1927]: 2026/01/13 23:39:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:39:37.673570 amazon-ssm-agent[1927]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
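[Editorial aside, not part of the boot log] A hedged sketch of the IMDSv2 request pattern suggested by the metadata fetches above: a PUT to http://169.254.169.254/latest/api/token for a session token, then GETs of the 2021-01-03 meta-data paths shown in the log. The paths are taken from the log; the header names are the standard EC2 IMDSv2 ones and the code is illustrative, not the agents' actual implementation.

    # Illustrative only, standard library; the instance-id path appears in the log above.
    import urllib.request

    BASE = "http://169.254.169.254"
    token_req = urllib.request.Request(
        f"{BASE}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(token_req).read().decode()

    req = urllib.request.Request(
        f"{BASE}/2021-01-03/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token})
    print(urllib.request.urlopen(req).read().decode())   # the instance ID fetched above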
Jan 13 23:39:37.673737 amazon-ssm-agent[1927]: 2026/01/13 23:39:37 processing appconfig overrides Jan 13 23:39:37.678658 containerd[1894]: time="2026-01-13T23:39:37.678487114Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.932µs" Jan 13 23:39:37.678658 containerd[1894]: time="2026-01-13T23:39:37.678560290Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 13 23:39:37.678658 containerd[1894]: time="2026-01-13T23:39:37.678638998Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 13 23:39:37.678886 containerd[1894]: time="2026-01-13T23:39:37.678669802Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 13 23:39:37.679097 containerd[1894]: time="2026-01-13T23:39:37.679025422Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 13 23:39:37.679173 containerd[1894]: time="2026-01-13T23:39:37.679093594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:39:37.680582 containerd[1894]: time="2026-01-13T23:39:37.679249438Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:39:37.680582 containerd[1894]: time="2026-01-13T23:39:37.679308022Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:39:37.689333 containerd[1894]: time="2026-01-13T23:39:37.679855822Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:39:37.689466 containerd[1894]: time="2026-01-13T23:39:37.689326114Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:39:37.689794 containerd[1894]: time="2026-01-13T23:39:37.689724694Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:39:37.689794 containerd[1894]: time="2026-01-13T23:39:37.689785510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 23:39:37.692414 containerd[1894]: time="2026-01-13T23:39:37.692324254Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 23:39:37.692414 containerd[1894]: time="2026-01-13T23:39:37.692389930Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 13 23:39:37.692677 containerd[1894]: time="2026-01-13T23:39:37.692617414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 13 23:39:37.698553 containerd[1894]: time="2026-01-13T23:39:37.698480410Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 13 23:39:37.698680 containerd[1894]: time="2026-01-13T23:39:37.698586442Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 13 23:39:37.698680 containerd[1894]: time="2026-01-13T23:39:37.698616766Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 13 23:39:37.698882 containerd[1894]: time="2026-01-13T23:39:37.698806342Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 13 23:39:37.706401 containerd[1894]: time="2026-01-13T23:39:37.706275058Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 13 23:39:37.706635 containerd[1894]: time="2026-01-13T23:39:37.706573738Z" level=info msg="metadata content store policy set" policy=shared Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724025182Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724136242Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724312738Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724348702Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724382074Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724411078Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724438966Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724468234Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724511002Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724541866Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724568134Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724593994Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724849798Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 13 23:39:37.724991 containerd[1894]: time="2026-01-13T23:39:37.724944166Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 13 23:39:37.725650 containerd[1894]: time="2026-01-13T23:39:37.725306866Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 13 23:39:37.725650 containerd[1894]: time="2026-01-13T23:39:37.725367226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 13 23:39:37.725650 containerd[1894]: time="2026-01-13T23:39:37.725421610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 13 23:39:37.725650 containerd[1894]: time="2026-01-13T23:39:37.725473498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 13 23:39:37.725650 containerd[1894]: time="2026-01-13T23:39:37.725515378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 13 23:39:37.725650 containerd[1894]: time="2026-01-13T23:39:37.725544706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 13 23:39:37.725650 containerd[1894]: time="2026-01-13T23:39:37.725587582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 13 23:39:37.725650 containerd[1894]: time="2026-01-13T23:39:37.725635510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 13 23:39:37.726074 containerd[1894]: time="2026-01-13T23:39:37.725676982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 13 23:39:37.726074 containerd[1894]: time="2026-01-13T23:39:37.725820058Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 13 23:39:37.726074 containerd[1894]: time="2026-01-13T23:39:37.725854126Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 13 23:39:37.734346 containerd[1894]: time="2026-01-13T23:39:37.734101990Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 13 23:39:37.736625 containerd[1894]: time="2026-01-13T23:39:37.736170394Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 13 23:39:37.736625 containerd[1894]: time="2026-01-13T23:39:37.736255138Z" level=info msg="Start snapshots syncer" Jan 13 23:39:37.736625 containerd[1894]: time="2026-01-13T23:39:37.736322026Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 13 23:39:37.752710 amazon-ssm-agent[1927]: 2026-01-13 23:39:37.6348 INFO https_proxy: Jan 13 23:39:37.754619 containerd[1894]: time="2026-01-13T23:39:37.747360418Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 13 23:39:37.755927 containerd[1894]: time="2026-01-13T23:39:37.755806198Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 13 23:39:37.761534 containerd[1894]: time="2026-01-13T23:39:37.760396738Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 13 23:39:37.761534 containerd[1894]: time="2026-01-13T23:39:37.760720858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 13 23:39:37.761534 containerd[1894]: time="2026-01-13T23:39:37.760788478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 13 23:39:37.761534 containerd[1894]: time="2026-01-13T23:39:37.760832314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 13 23:39:37.761534 containerd[1894]: time="2026-01-13T23:39:37.760864882Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 13 23:39:37.763073 containerd[1894]: time="2026-01-13T23:39:37.762979378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 13 23:39:37.765077 containerd[1894]: time="2026-01-13T23:39:37.763097578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 13 23:39:37.765219 containerd[1894]: time="2026-01-13T23:39:37.765134134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 13 23:39:37.767057 containerd[1894]: time="2026-01-13T23:39:37.765213130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 13 
23:39:37.767173 containerd[1894]: time="2026-01-13T23:39:37.767105206Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 13 23:39:37.767309 containerd[1894]: time="2026-01-13T23:39:37.767262838Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:39:37.767400 containerd[1894]: time="2026-01-13T23:39:37.767353786Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:39:37.769704 containerd[1894]: time="2026-01-13T23:39:37.767385970Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:39:37.769831 containerd[1894]: time="2026-01-13T23:39:37.769716298Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:39:37.769831 containerd[1894]: time="2026-01-13T23:39:37.769751614Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 13 23:39:37.769831 containerd[1894]: time="2026-01-13T23:39:37.769784182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 13 23:39:37.769831 containerd[1894]: time="2026-01-13T23:39:37.769814410Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 13 23:39:37.774440 containerd[1894]: time="2026-01-13T23:39:37.774355966Z" level=info msg="runtime interface created" Jan 13 23:39:37.774440 containerd[1894]: time="2026-01-13T23:39:37.774422590Z" level=info msg="created NRI interface" Jan 13 23:39:37.774639 containerd[1894]: time="2026-01-13T23:39:37.774462334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 13 23:39:37.774639 containerd[1894]: time="2026-01-13T23:39:37.774503266Z" level=info msg="Connect containerd service" Jan 13 23:39:37.774639 containerd[1894]: time="2026-01-13T23:39:37.774578122Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 23:39:37.798940 containerd[1894]: time="2026-01-13T23:39:37.790204906Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 23:39:37.854739 amazon-ssm-agent[1927]: 2026-01-13 23:39:37.6348 INFO http_proxy: Jan 13 23:39:37.955223 amazon-ssm-agent[1927]: 2026-01-13 23:39:37.6348 INFO no_proxy: Jan 13 23:39:38.060649 amazon-ssm-agent[1927]: 2026-01-13 23:39:37.6351 INFO Checking if agent identity type OnPrem can be assumed Jan 13 23:39:38.163952 amazon-ssm-agent[1927]: 2026-01-13 23:39:37.6352 INFO Checking if agent identity type EC2 can be assumed Jan 13 23:39:38.261011 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.1076 INFO Agent will take identity from EC2 Jan 13 23:39:38.360205 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.1154 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 13 23:39:38.444568 polkitd[2024]: Started polkitd version 126 Jan 13 23:39:38.459446 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.1155 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 23:39:38.487155 locksmithd[1925]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" 
strategy="reboot" Jan 13 23:39:38.492950 containerd[1894]: time="2026-01-13T23:39:38.492050590Z" level=info msg="Start subscribing containerd event" Jan 13 23:39:38.492950 containerd[1894]: time="2026-01-13T23:39:38.492166126Z" level=info msg="Start recovering state" Jan 13 23:39:38.493520 containerd[1894]: time="2026-01-13T23:39:38.493371922Z" level=info msg="Start event monitor" Jan 13 23:39:38.493957 containerd[1894]: time="2026-01-13T23:39:38.493711006Z" level=info msg="Start cni network conf syncer for default" Jan 13 23:39:38.493957 containerd[1894]: time="2026-01-13T23:39:38.493744462Z" level=info msg="Start streaming server" Jan 13 23:39:38.495489 containerd[1894]: time="2026-01-13T23:39:38.493764250Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 13 23:39:38.495489 containerd[1894]: time="2026-01-13T23:39:38.494955382Z" level=info msg="runtime interface starting up..." Jan 13 23:39:38.495489 containerd[1894]: time="2026-01-13T23:39:38.494976190Z" level=info msg="starting plugins..." Jan 13 23:39:38.496938 containerd[1894]: time="2026-01-13T23:39:38.496607806Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 13 23:39:38.502091 containerd[1894]: time="2026-01-13T23:39:38.499501810Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 23:39:38.502091 containerd[1894]: time="2026-01-13T23:39:38.499620958Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 23:39:38.502278 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 23:39:38.511174 containerd[1894]: time="2026-01-13T23:39:38.508838746Z" level=info msg="containerd successfully booted in 0.887331s" Jan 13 23:39:38.518341 polkitd[2024]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 23:39:38.519035 polkitd[2024]: Loading rules from directory /run/polkit-1/rules.d Jan 13 23:39:38.519144 polkitd[2024]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 13 23:39:38.519807 polkitd[2024]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 13 23:39:38.521941 polkitd[2024]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 13 23:39:38.522073 polkitd[2024]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 23:39:38.525071 polkitd[2024]: Finished loading, compiling and executing 2 rules Jan 13 23:39:38.525656 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 23:39:38.530040 dbus-daemon[1840]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 23:39:38.530774 polkitd[2024]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 23:39:38.558700 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.1155 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 23:39:38.580791 systemd-hostnamed[1901]: Hostname set to (transient) Jan 13 23:39:38.581025 systemd-resolved[1453]: System hostname changed to 'ip-172-31-24-127'. Jan 13 23:39:38.653102 amazon-ssm-agent[1927]: 2026/01/13 23:39:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:39:38.653102 amazon-ssm-agent[1927]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 23:39:38.654615 amazon-ssm-agent[1927]: 2026/01/13 23:39:38 processing appconfig overrides Jan 13 23:39:38.658505 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.1155 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jan 13 23:39:38.693523 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.1155 INFO [Registrar] Starting registrar module Jan 13 23:39:38.694013 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.1292 INFO [EC2Identity] Checking disk for registration info Jan 13 23:39:38.694546 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.1292 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 13 23:39:38.694546 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.1292 INFO [EC2Identity] Generating registration keypair Jan 13 23:39:38.694546 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.5955 INFO [EC2Identity] Checking write access before registering Jan 13 23:39:38.694546 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.5970 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 13 23:39:38.695014 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.6516 INFO [EC2Identity] EC2 registration was successful. Jan 13 23:39:38.695014 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.6519 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 13 23:39:38.695014 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.6525 INFO [CredentialRefresher] credentialRefresher has started Jan 13 23:39:38.695669 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.6527 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 23:39:38.695669 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.6918 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 23:39:38.695669 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.6933 INFO [CredentialRefresher] Credentials ready Jan 13 23:39:38.757513 amazon-ssm-agent[1927]: 2026-01-13 23:39:38.6962 INFO [CredentialRefresher] Next credential rotation will be in 29.9999294271 minutes Jan 13 23:39:38.759781 sshd_keygen[1898]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 23:39:38.824458 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 23:39:38.833110 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 23:39:38.867117 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 23:39:38.868030 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 23:39:38.878113 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 23:39:38.898886 tar[1862]: linux-arm64/README.md Jan 13 23:39:38.930043 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 23:39:38.934426 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 23:39:38.944358 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 23:39:38.952588 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 23:39:38.960047 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 23:39:39.728254 amazon-ssm-agent[1927]: 2026-01-13 23:39:39.7237 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 23:39:39.829192 amazon-ssm-agent[1927]: 2026-01-13 23:39:39.7650 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2115) started Jan 13 23:39:39.929676 amazon-ssm-agent[1927]: 2026-01-13 23:39:39.7651 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 23:39:41.500448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 23:39:41.510048 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 23:39:41.513214 systemd[1]: Startup finished in 4.117s (kernel) + 12.016s (initrd) + 15.678s (userspace) = 31.812s. Jan 13 23:39:41.529960 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:39:42.714789 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 23:39:42.719394 systemd[1]: Started sshd@0-172.31.24.127:22-20.161.92.111:58676.service - OpenSSH per-connection server daemon (20.161.92.111:58676). Jan 13 23:39:43.377192 kubelet[2130]: E0113 23:39:43.377101 2130 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:39:43.381641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:39:43.382400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:39:43.383346 systemd[1]: kubelet.service: Consumed 1.389s CPU time, 249.3M memory peak. Jan 13 23:39:43.433935 sshd[2142]: Accepted publickey for core from 20.161.92.111 port 58676 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:39:43.437869 sshd-session[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:43.451815 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 23:39:43.454226 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 23:39:43.468923 systemd-logind[1857]: New session 1 of user core. Jan 13 23:39:43.487072 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 23:39:43.493874 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 23:39:43.518207 (systemd)[2150]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:43.524127 systemd-logind[1857]: New session 2 of user core. Jan 13 23:39:43.846737 systemd[2150]: Queued start job for default target default.target. Jan 13 23:39:43.858983 systemd[2150]: Created slice app.slice - User Application Slice. Jan 13 23:39:43.859056 systemd[2150]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 13 23:39:43.859088 systemd[2150]: Reached target paths.target - Paths. Jan 13 23:39:43.859193 systemd[2150]: Reached target timers.target - Timers. Jan 13 23:39:43.861673 systemd[2150]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 23:39:43.865209 systemd[2150]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 13 23:39:43.893687 systemd[2150]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 23:39:43.893879 systemd[2150]: Reached target sockets.target - Sockets. Jan 13 23:39:43.898097 systemd[2150]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 13 23:39:43.898327 systemd[2150]: Reached target basic.target - Basic System. Jan 13 23:39:43.898447 systemd[2150]: Reached target default.target - Main User Target. Jan 13 23:39:43.898523 systemd[2150]: Startup finished in 363ms. Jan 13 23:39:43.899008 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 13 23:39:43.909252 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 23:39:44.169728 systemd[1]: Started sshd@1-172.31.24.127:22-20.161.92.111:58690.service - OpenSSH per-connection server daemon (20.161.92.111:58690). Jan 13 23:39:44.653049 sshd[2164]: Accepted publickey for core from 20.161.92.111 port 58690 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:39:44.655715 sshd-session[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:44.665007 systemd-logind[1857]: New session 3 of user core. Jan 13 23:39:44.673276 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 23:39:44.895959 sshd[2168]: Connection closed by 20.161.92.111 port 58690 Jan 13 23:39:44.895519 sshd-session[2164]: pam_unix(sshd:session): session closed for user core Jan 13 23:39:44.902803 systemd-logind[1857]: Session 3 logged out. Waiting for processes to exit. Jan 13 23:39:44.903025 systemd[1]: sshd@1-172.31.24.127:22-20.161.92.111:58690.service: Deactivated successfully. Jan 13 23:39:44.907782 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 23:39:44.912299 systemd-logind[1857]: Removed session 3. Jan 13 23:39:44.991296 systemd[1]: Started sshd@2-172.31.24.127:22-20.161.92.111:58692.service - OpenSSH per-connection server daemon (20.161.92.111:58692). Jan 13 23:39:45.453599 sshd[2174]: Accepted publickey for core from 20.161.92.111 port 58692 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:39:45.456212 sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:45.464825 systemd-logind[1857]: New session 4 of user core. Jan 13 23:39:45.474247 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 23:39:45.684797 sshd[2178]: Connection closed by 20.161.92.111 port 58692 Jan 13 23:39:45.685802 sshd-session[2174]: pam_unix(sshd:session): session closed for user core Jan 13 23:39:45.695371 systemd-logind[1857]: Session 4 logged out. Waiting for processes to exit. Jan 13 23:39:45.696161 systemd[1]: sshd@2-172.31.24.127:22-20.161.92.111:58692.service: Deactivated successfully. Jan 13 23:39:45.699882 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 23:39:45.703947 systemd-logind[1857]: Removed session 4. Jan 13 23:39:45.780290 systemd[1]: Started sshd@3-172.31.24.127:22-20.161.92.111:58702.service - OpenSSH per-connection server daemon (20.161.92.111:58702). Jan 13 23:39:46.253473 sshd[2184]: Accepted publickey for core from 20.161.92.111 port 58702 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:39:46.256060 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:46.265503 systemd-logind[1857]: New session 5 of user core. Jan 13 23:39:46.269203 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 23:39:46.495220 sshd[2188]: Connection closed by 20.161.92.111 port 58702 Jan 13 23:39:46.495650 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Jan 13 23:39:46.505269 systemd[1]: sshd@3-172.31.24.127:22-20.161.92.111:58702.service: Deactivated successfully. Jan 13 23:39:46.509468 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 23:39:46.512037 systemd-logind[1857]: Session 5 logged out. Waiting for processes to exit. Jan 13 23:39:46.514984 systemd-logind[1857]: Removed session 5. 
Jan 13 23:39:46.589863 systemd[1]: Started sshd@4-172.31.24.127:22-20.161.92.111:58716.service - OpenSSH per-connection server daemon (20.161.92.111:58716). Jan 13 23:39:47.048009 sshd[2194]: Accepted publickey for core from 20.161.92.111 port 58716 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:39:47.050428 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:47.058995 systemd-logind[1857]: New session 6 of user core. Jan 13 23:39:47.068229 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 23:39:47.257405 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 23:39:47.258126 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:39:47.272945 sudo[2199]: pam_unix(sudo:session): session closed for user root Jan 13 23:39:47.350511 sshd[2198]: Connection closed by 20.161.92.111 port 58716 Jan 13 23:39:47.351498 sshd-session[2194]: pam_unix(sshd:session): session closed for user core Jan 13 23:39:47.358528 systemd[1]: sshd@4-172.31.24.127:22-20.161.92.111:58716.service: Deactivated successfully. Jan 13 23:39:47.361609 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 23:39:47.367612 systemd-logind[1857]: Session 6 logged out. Waiting for processes to exit. Jan 13 23:39:47.369841 systemd-logind[1857]: Removed session 6. Jan 13 23:39:47.446486 systemd[1]: Started sshd@5-172.31.24.127:22-20.161.92.111:58730.service - OpenSSH per-connection server daemon (20.161.92.111:58730). Jan 13 23:39:47.926961 sshd[2206]: Accepted publickey for core from 20.161.92.111 port 58730 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:39:47.929894 sshd-session[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:47.938722 systemd-logind[1857]: New session 7 of user core. Jan 13 23:39:47.947281 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 23:39:48.095857 sudo[2212]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 23:39:48.096679 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:39:48.101068 sudo[2212]: pam_unix(sudo:session): session closed for user root Jan 13 23:39:48.113570 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 23:39:48.114241 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:39:48.127769 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 13 23:39:48.205000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 13 23:39:48.209131 kernel: kauditd_printk_skb: 142 callbacks suppressed Jan 13 23:39:48.209196 kernel: audit: type=1305 audit(1768347588.205:236): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 13 23:39:48.211248 augenrules[2236]: No rules Jan 13 23:39:48.205000 audit[2236]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeea93e40 a2=420 a3=0 items=0 ppid=2217 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:48.220217 kernel: audit: type=1300 audit(1768347588.205:236): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeea93e40 a2=420 a3=0 items=0 ppid=2217 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:48.213175 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 23:39:48.213649 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 23:39:48.205000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:39:48.224495 kernel: audit: type=1327 audit(1768347588.205:236): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:39:48.221746 sudo[2211]: pam_unix(sudo:session): session closed for user root Jan 13 23:39:48.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:48.230097 kernel: audit: type=1130 audit(1768347588.211:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:48.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:48.234738 kernel: audit: type=1131 audit(1768347588.211:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:48.219000 audit[2211]: USER_END pid=2211 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:39:48.219000 audit[2211]: CRED_DISP pid=2211 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:39:48.244794 kernel: audit: type=1106 audit(1768347588.219:239): pid=2211 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:48.244875 kernel: audit: type=1104 audit(1768347588.219:240): pid=2211 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:39:48.303012 sshd[2210]: Connection closed by 20.161.92.111 port 58730 Jan 13 23:39:48.301954 sshd-session[2206]: pam_unix(sshd:session): session closed for user core Jan 13 23:39:48.303000 audit[2206]: USER_END pid=2206 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:39:48.317415 kernel: audit: type=1106 audit(1768347588.303:241): pid=2206 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:39:48.317533 kernel: audit: type=1104 audit(1768347588.303:242): pid=2206 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:39:48.303000 audit[2206]: CRED_DISP pid=2206 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:39:48.311534 systemd[1]: sshd@5-172.31.24.127:22-20.161.92.111:58730.service: Deactivated successfully. Jan 13 23:39:48.315781 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 23:39:48.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.24.127:22-20.161.92.111:58730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:48.324327 kernel: audit: type=1131 audit(1768347588.311:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.24.127:22-20.161.92.111:58730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:48.323805 systemd-logind[1857]: Session 7 logged out. Waiting for processes to exit. Jan 13 23:39:48.326131 systemd-logind[1857]: Removed session 7. Jan 13 23:39:48.397996 systemd[1]: Started sshd@6-172.31.24.127:22-20.161.92.111:58736.service - OpenSSH per-connection server daemon (20.161.92.111:58736). Jan 13 23:39:48.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.127:22-20.161.92.111:58736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:39:48.854000 audit[2245]: USER_ACCT pid=2245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:39:48.855759 sshd[2245]: Accepted publickey for core from 20.161.92.111 port 58736 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:39:48.857000 audit[2245]: CRED_ACQ pid=2245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:39:48.857000 audit[2245]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff5b16990 a2=3 a3=0 items=0 ppid=1 pid=2245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:48.857000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:39:48.859121 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:48.870017 systemd-logind[1857]: New session 8 of user core. Jan 13 23:39:48.877263 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 23:39:48.882000 audit[2245]: USER_START pid=2245 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:39:48.886000 audit[2249]: CRED_ACQ pid=2249 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:39:49.021000 audit[2250]: USER_ACCT pid=2250 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:39:49.022892 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 23:39:49.022000 audit[2250]: CRED_REFR pid=2250 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:39:49.023685 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:39:49.022000 audit[2250]: USER_START pid=2250 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:39:50.265700 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 13 23:39:50.281822 (dockerd)[2268]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 23:39:51.380524 dockerd[2268]: time="2026-01-13T23:39:51.380258069Z" level=info msg="Starting up" Jan 13 23:39:51.383727 dockerd[2268]: time="2026-01-13T23:39:51.383386436Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 13 23:39:51.410287 dockerd[2268]: time="2026-01-13T23:39:51.410193853Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 13 23:39:51.510557 dockerd[2268]: time="2026-01-13T23:39:51.510486443Z" level=info msg="Loading containers: start." Jan 13 23:39:51.559974 kernel: Initializing XFRM netlink socket Jan 13 23:39:51.686000 audit[2318]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.686000 audit[2318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffe961910 a2=0 a3=0 items=0 ppid=2268 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.686000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 13 23:39:51.692000 audit[2320]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.692000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc56e0960 a2=0 a3=0 items=0 ppid=2268 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 13 23:39:51.697000 audit[2322]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.697000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc26d130 a2=0 a3=0 items=0 ppid=2268 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.697000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 13 23:39:51.702000 audit[2324]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.702000 audit[2324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff498b600 a2=0 a3=0 items=0 ppid=2268 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.702000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 13 23:39:51.707000 audit[2326]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.707000 audit[2326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdbd1cfb0 a2=0 a3=0 items=0 ppid=2268 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 13 23:39:51.712000 audit[2328]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.712000 audit[2328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeadd83d0 a2=0 a3=0 items=0 ppid=2268 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:39:51.718000 audit[2330]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.718000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff375e440 a2=0 a3=0 items=0 ppid=2268 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:39:51.724000 audit[2332]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.724000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd4928ae0 a2=0 a3=0 items=0 ppid=2268 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.724000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 13 23:39:51.799000 audit[2335]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.799000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffe65026e0 a2=0 a3=0 items=0 ppid=2268 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.799000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 13 23:39:51.805000 audit[2337]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2337 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.805000 audit[2337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe5ac77c0 a2=0 a3=0 items=0 ppid=2268 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 13 23:39:51.811000 audit[2339]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.811000 audit[2339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd80fa600 a2=0 a3=0 items=0 ppid=2268 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.811000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 13 23:39:51.816000 audit[2341]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.816000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd7938270 a2=0 a3=0 items=0 ppid=2268 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:39:51.821000 audit[2343]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2343 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.821000 audit[2343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe54e4560 a2=0 a3=0 items=0 ppid=2268 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 13 23:39:51.901000 audit[2373]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.901000 audit[2373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd098e230 a2=0 a3=0 items=0 ppid=2268 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.901000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 13 23:39:51.906000 audit[2375]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.906000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc28a7840 a2=0 a3=0 items=0 ppid=2268 
pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.906000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 13 23:39:51.911000 audit[2377]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.911000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed8d3290 a2=0 a3=0 items=0 ppid=2268 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 13 23:39:51.916000 audit[2379]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.916000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff97a4a00 a2=0 a3=0 items=0 ppid=2268 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.916000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 13 23:39:51.921000 audit[2381]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.921000 audit[2381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3a8d070 a2=0 a3=0 items=0 ppid=2268 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 13 23:39:51.925000 audit[2383]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.925000 audit[2383]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd8057310 a2=0 a3=0 items=0 ppid=2268 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:39:51.930000 audit[2385]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.930000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffbc33510 a2=0 a3=0 items=0 ppid=2268 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 
23:39:51.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:39:51.935000 audit[2387]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.935000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdc0b5970 a2=0 a3=0 items=0 ppid=2268 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 13 23:39:51.941000 audit[2389]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.941000 audit[2389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffc2b3b830 a2=0 a3=0 items=0 ppid=2268 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 13 23:39:51.947000 audit[2391]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.947000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff70053d0 a2=0 a3=0 items=0 ppid=2268 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 13 23:39:51.952000 audit[2393]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.952000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe7508040 a2=0 a3=0 items=0 ppid=2268 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 13 23:39:51.958000 audit[2395]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.958000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd666b0d0 a2=0 a3=0 items=0 ppid=2268 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.958000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:39:51.963000 audit[2397]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.963000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe18a6510 a2=0 a3=0 items=0 ppid=2268 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 13 23:39:51.976000 audit[2402]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.976000 audit[2402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe05bc810 a2=0 a3=0 items=0 ppid=2268 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 13 23:39:51.981000 audit[2404]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.981000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe43d0810 a2=0 a3=0 items=0 ppid=2268 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 13 23:39:51.986000 audit[2406]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:51.986000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd69bfca0 a2=0 a3=0 items=0 ppid=2268 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 13 23:39:51.992000 audit[2408]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.992000 audit[2408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcdf55310 a2=0 a3=0 items=0 ppid=2268 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 13 23:39:51.997000 audit[2410]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=2410 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:51.997000 audit[2410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff21ad2a0 a2=0 a3=0 items=0 ppid=2268 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:51.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 13 23:39:52.002000 audit[2412]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:39:52.002000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff10db670 a2=0 a3=0 items=0 ppid=2268 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:52.002000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 13 23:39:52.031490 (udev-worker)[2291]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:39:52.040000 audit[2419]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:52.040000 audit[2419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffe0e44080 a2=0 a3=0 items=0 ppid=2268 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:52.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 13 23:39:52.046000 audit[2421]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:52.046000 audit[2421]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffffd2b4210 a2=0 a3=0 items=0 ppid=2268 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:52.046000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 13 23:39:52.072000 audit[2429]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:52.072000 audit[2429]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffffdabcd10 a2=0 a3=0 items=0 ppid=2268 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:52.072000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 13 23:39:52.093000 audit[2435]: NETFILTER_CFG table=filter:37 family=2 
entries=1 op=nft_register_rule pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:52.093000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffeced47f0 a2=0 a3=0 items=0 ppid=2268 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:52.093000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 13 23:39:52.100000 audit[2437]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:52.100000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffdd6b72d0 a2=0 a3=0 items=0 ppid=2268 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:52.100000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 13 23:39:52.105000 audit[2439]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:52.105000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcd29aad0 a2=0 a3=0 items=0 ppid=2268 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:52.105000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 13 23:39:52.110000 audit[2441]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:52.110000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe695eb10 a2=0 a3=0 items=0 ppid=2268 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:52.110000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:39:52.116000 audit[2443]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:39:52.116000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd3d6bb30 a2=0 a3=0 items=0 ppid=2268 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:39:52.116000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 13 23:39:52.118376 systemd-networkd[1617]: docker0: Link UP Jan 13 23:39:52.131089 dockerd[2268]: time="2026-01-13T23:39:52.130983721Z" level=info msg="Loading containers: done." Jan 13 23:39:52.194752 dockerd[2268]: time="2026-01-13T23:39:52.194578999Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 23:39:52.194752 dockerd[2268]: time="2026-01-13T23:39:52.194714079Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 13 23:39:52.195516 dockerd[2268]: time="2026-01-13T23:39:52.195459856Z" level=info msg="Initializing buildkit" Jan 13 23:39:52.255422 dockerd[2268]: time="2026-01-13T23:39:52.255028705Z" level=info msg="Completed buildkit initialization" Jan 13 23:39:52.269430 dockerd[2268]: time="2026-01-13T23:39:52.269316542Z" level=info msg="Daemon has completed initialization" Jan 13 23:39:52.269751 dockerd[2268]: time="2026-01-13T23:39:52.269510715Z" level=info msg="API listen on /run/docker.sock" Jan 13 23:39:52.272856 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 23:39:52.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:52.441047 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1365280913-merged.mount: Deactivated successfully. Jan 13 23:39:53.632587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 23:39:53.637269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:39:54.089115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:39:54.091031 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 13 23:39:54.091118 kernel: audit: type=1130 audit(1768347594.088:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:54.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:39:54.123657 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:39:54.196423 kubelet[2490]: E0113 23:39:54.196329 2490 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:39:54.203480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:39:54.203795 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
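The audit PROCTITLE fields in the records above store each command line as NUL-separated hex, so the exact iptables invocations Docker issued can be recovered straight from the log. A minimal sketch (plain Python, assuming only the hex format shown above) that decodes the last proctitle back into its argv:

```python
# Decode an audit PROCTITLE value: the kernel records the process argv as
# hex-encoded bytes with NUL separators between arguments.
def decode_proctitle(hex_string: str) -> str:
    raw = bytes.fromhex(hex_string)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# Last PROCTITLE from the audit records above; decodes to
# "/usr/bin/iptables --wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP"
print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572"
    "002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32"
    "002D6F00646F636B657230002D6A0044524F50"
))
```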
Jan 13 23:39:54.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:39:54.204918 systemd[1]: kubelet.service: Consumed 328ms CPU time, 106.6M memory peak. Jan 13 23:39:54.211924 kernel: audit: type=1131 audit(1768347594.202:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:39:54.338655 containerd[1894]: time="2026-01-13T23:39:54.338581560Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 13 23:39:55.260661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682634329.mount: Deactivated successfully. Jan 13 23:39:56.594356 containerd[1894]: time="2026-01-13T23:39:56.594239237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:39:56.596577 containerd[1894]: time="2026-01-13T23:39:56.596448845Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22974850" Jan 13 23:39:56.599407 containerd[1894]: time="2026-01-13T23:39:56.598469047Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:39:56.606131 containerd[1894]: time="2026-01-13T23:39:56.606068809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:39:56.608499 containerd[1894]: time="2026-01-13T23:39:56.608429092Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.269780347s" Jan 13 23:39:56.608764 containerd[1894]: time="2026-01-13T23:39:56.608720262Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 13 23:39:56.609757 containerd[1894]: time="2026-01-13T23:39:56.609701008Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 13 23:39:57.928456 containerd[1894]: time="2026-01-13T23:39:57.928377518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:39:57.931114 containerd[1894]: time="2026-01-13T23:39:57.931029079Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Jan 13 23:39:57.932534 containerd[1894]: time="2026-01-13T23:39:57.932484254Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:39:57.937515 containerd[1894]: time="2026-01-13T23:39:57.937430570Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:39:57.939791 containerd[1894]: time="2026-01-13T23:39:57.939369311Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.328838509s" Jan 13 23:39:57.939791 containerd[1894]: time="2026-01-13T23:39:57.939446606Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 13 23:39:57.940529 containerd[1894]: time="2026-01-13T23:39:57.940303534Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 13 23:39:58.943665 containerd[1894]: time="2026-01-13T23:39:58.943562456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:39:58.947189 containerd[1894]: time="2026-01-13T23:39:58.947081906Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 13 23:39:58.949114 containerd[1894]: time="2026-01-13T23:39:58.948996984Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:39:58.954947 containerd[1894]: time="2026-01-13T23:39:58.954544668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:39:58.957647 containerd[1894]: time="2026-01-13T23:39:58.957565138Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.017198476s" Jan 13 23:39:58.957949 containerd[1894]: time="2026-01-13T23:39:58.957878050Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 13 23:39:58.959325 containerd[1894]: time="2026-01-13T23:39:58.959236469Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 13 23:40:00.250679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106251321.mount: Deactivated successfully. 
Jan 13 23:40:00.641359 containerd[1894]: time="2026-01-13T23:40:00.640118093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:00.642137 containerd[1894]: time="2026-01-13T23:40:00.642068192Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Jan 13 23:40:00.644515 containerd[1894]: time="2026-01-13T23:40:00.644469728Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:00.648646 containerd[1894]: time="2026-01-13T23:40:00.648596094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:00.652831 containerd[1894]: time="2026-01-13T23:40:00.652735247Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.693219314s" Jan 13 23:40:00.652831 containerd[1894]: time="2026-01-13T23:40:00.652817212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 13 23:40:00.653981 containerd[1894]: time="2026-01-13T23:40:00.653913936Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 13 23:40:01.306284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499611831.mount: Deactivated successfully. 
Jan 13 23:40:02.610232 containerd[1894]: time="2026-01-13T23:40:02.610140950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:02.612051 containerd[1894]: time="2026-01-13T23:40:02.611944420Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=97" Jan 13 23:40:02.614951 containerd[1894]: time="2026-01-13T23:40:02.614759070Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:02.622770 containerd[1894]: time="2026-01-13T23:40:02.622653315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:02.625994 containerd[1894]: time="2026-01-13T23:40:02.624993333Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.971011083s" Jan 13 23:40:02.625994 containerd[1894]: time="2026-01-13T23:40:02.625083102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 13 23:40:02.626946 containerd[1894]: time="2026-01-13T23:40:02.626448772Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 13 23:40:03.160754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387978489.mount: Deactivated successfully. 
Jan 13 23:40:03.175147 containerd[1894]: time="2026-01-13T23:40:03.175058708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:03.177732 containerd[1894]: time="2026-01-13T23:40:03.177171308Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 13 23:40:03.180090 containerd[1894]: time="2026-01-13T23:40:03.180018483Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:03.188334 containerd[1894]: time="2026-01-13T23:40:03.188254178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:03.190370 containerd[1894]: time="2026-01-13T23:40:03.190307180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 563.805521ms" Jan 13 23:40:03.190756 containerd[1894]: time="2026-01-13T23:40:03.190580581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 13 23:40:03.192025 containerd[1894]: time="2026-01-13T23:40:03.191962171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 13 23:40:03.787813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003385338.mount: Deactivated successfully. Jan 13 23:40:04.227696 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 23:40:04.234504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:40:05.133146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:40:05.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:05.140026 kernel: audit: type=1130 audit(1768347605.132:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:05.150522 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:40:05.306925 kubelet[2680]: E0113 23:40:05.306831 2680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:40:05.326022 kernel: audit: type=1131 audit(1768347605.311:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 13 23:40:05.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:40:05.311256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:40:05.311542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:40:05.312285 systemd[1]: kubelet.service: Consumed 347ms CPU time, 107M memory peak. Jan 13 23:40:07.127144 containerd[1894]: time="2026-01-13T23:40:07.127080215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:07.129158 containerd[1894]: time="2026-01-13T23:40:07.129072202Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=85821047" Jan 13 23:40:07.130943 containerd[1894]: time="2026-01-13T23:40:07.129675744Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:07.136528 containerd[1894]: time="2026-01-13T23:40:07.136471044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:07.138817 containerd[1894]: time="2026-01-13T23:40:07.138757911Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.946728625s" Jan 13 23:40:07.139062 containerd[1894]: time="2026-01-13T23:40:07.139025608Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 13 23:40:08.617991 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 23:40:08.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:08.632615 kernel: audit: type=1131 audit(1768347608.621:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:08.642000 audit: BPF prog-id=65 op=UNLOAD Jan 13 23:40:08.644928 kernel: audit: type=1334 audit(1768347608.642:299): prog-id=65 op=UNLOAD Jan 13 23:40:15.477151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 23:40:15.483242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:40:15.849321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:40:15.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:40:15.858950 kernel: audit: type=1130 audit(1768347615.848:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:15.865558 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:40:15.945966 kubelet[2728]: E0113 23:40:15.945251 2728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:40:15.951453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:40:15.951736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:40:15.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:40:15.957063 systemd[1]: kubelet.service: Consumed 307ms CPU time, 106.6M memory peak. Jan 13 23:40:15.958938 kernel: audit: type=1131 audit(1768347615.951:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:40:18.016276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:40:18.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:18.017313 systemd[1]: kubelet.service: Consumed 307ms CPU time, 106.6M memory peak. Jan 13 23:40:18.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:18.030319 kernel: audit: type=1130 audit(1768347618.016:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:18.030441 kernel: audit: type=1131 audit(1768347618.016:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:18.026232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:40:18.104127 systemd[1]: Reload requested from client PID 2743 ('systemctl') (unit session-8.scope)... Jan 13 23:40:18.104151 systemd[1]: Reloading... Jan 13 23:40:18.308165 zram_generator::config[2793]: No configuration found. Jan 13 23:40:18.800401 systemd[1]: Reloading finished in 695 ms. 
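The kubelet restart counter above climbs once per attempt (23:39:53, 23:40:04, 23:40:15), consistent with systemd restarting the unit about every ten seconds until /var/lib/kubelet/config.yaml exists; kubeadm's kubelet drop-in normally configures Restart=always with a roughly 10 s delay, though that unit file is not shown in this log. A quick sketch that computes the intervals from the journal timestamps:

```python
from datetime import datetime

# "Scheduled restart job" timestamps taken from the journal above.
restarts = ["23:39:53.632587", "23:40:04.227696", "23:40:15.477151"]

times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
for earlier, later in zip(times, times[1:]):
    print(f"restart interval: {(later - earlier).total_seconds():.1f}s")
# -> roughly 10.6s and 11.2s: the ~10s restart delay plus kubelet startup/exit time.
```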
Jan 13 23:40:18.842000 audit: BPF prog-id=69 op=LOAD Jan 13 23:40:18.842000 audit: BPF prog-id=46 op=UNLOAD Jan 13 23:40:18.850991 kernel: audit: type=1334 audit(1768347618.842:304): prog-id=69 op=LOAD Jan 13 23:40:18.851118 kernel: audit: type=1334 audit(1768347618.842:305): prog-id=46 op=UNLOAD Jan 13 23:40:18.851181 kernel: audit: type=1334 audit(1768347618.844:306): prog-id=70 op=LOAD Jan 13 23:40:18.851227 kernel: audit: type=1334 audit(1768347618.844:307): prog-id=71 op=LOAD Jan 13 23:40:18.844000 audit: BPF prog-id=70 op=LOAD Jan 13 23:40:18.844000 audit: BPF prog-id=71 op=LOAD Jan 13 23:40:18.855045 kernel: audit: type=1334 audit(1768347618.844:308): prog-id=57 op=UNLOAD Jan 13 23:40:18.844000 audit: BPF prog-id=57 op=UNLOAD Jan 13 23:40:18.844000 audit: BPF prog-id=58 op=UNLOAD Jan 13 23:40:18.859013 kernel: audit: type=1334 audit(1768347618.844:309): prog-id=58 op=UNLOAD Jan 13 23:40:18.849000 audit: BPF prog-id=72 op=LOAD Jan 13 23:40:18.849000 audit: BPF prog-id=59 op=UNLOAD Jan 13 23:40:18.849000 audit: BPF prog-id=73 op=LOAD Jan 13 23:40:18.849000 audit: BPF prog-id=74 op=LOAD Jan 13 23:40:18.849000 audit: BPF prog-id=60 op=UNLOAD Jan 13 23:40:18.849000 audit: BPF prog-id=61 op=UNLOAD Jan 13 23:40:18.855000 audit: BPF prog-id=75 op=LOAD Jan 13 23:40:18.855000 audit: BPF prog-id=54 op=UNLOAD Jan 13 23:40:18.855000 audit: BPF prog-id=76 op=LOAD Jan 13 23:40:18.855000 audit: BPF prog-id=77 op=LOAD Jan 13 23:40:18.855000 audit: BPF prog-id=55 op=UNLOAD Jan 13 23:40:18.855000 audit: BPF prog-id=56 op=UNLOAD Jan 13 23:40:18.856000 audit: BPF prog-id=78 op=LOAD Jan 13 23:40:18.856000 audit: BPF prog-id=68 op=UNLOAD Jan 13 23:40:18.859000 audit: BPF prog-id=79 op=LOAD Jan 13 23:40:18.859000 audit: BPF prog-id=51 op=UNLOAD Jan 13 23:40:18.859000 audit: BPF prog-id=80 op=LOAD Jan 13 23:40:18.859000 audit: BPF prog-id=81 op=LOAD Jan 13 23:40:18.859000 audit: BPF prog-id=52 op=UNLOAD Jan 13 23:40:18.859000 audit: BPF prog-id=53 op=UNLOAD Jan 13 23:40:18.862000 audit: BPF prog-id=82 op=LOAD Jan 13 23:40:18.864000 audit: BPF prog-id=50 op=UNLOAD Jan 13 23:40:18.869000 audit: BPF prog-id=83 op=LOAD Jan 13 23:40:18.869000 audit: BPF prog-id=47 op=UNLOAD Jan 13 23:40:18.869000 audit: BPF prog-id=84 op=LOAD Jan 13 23:40:18.869000 audit: BPF prog-id=85 op=LOAD Jan 13 23:40:18.869000 audit: BPF prog-id=48 op=UNLOAD Jan 13 23:40:18.869000 audit: BPF prog-id=49 op=UNLOAD Jan 13 23:40:18.887000 audit: BPF prog-id=86 op=LOAD Jan 13 23:40:18.887000 audit: BPF prog-id=62 op=UNLOAD Jan 13 23:40:18.888000 audit: BPF prog-id=87 op=LOAD Jan 13 23:40:18.888000 audit: BPF prog-id=88 op=LOAD Jan 13 23:40:18.888000 audit: BPF prog-id=63 op=UNLOAD Jan 13 23:40:18.888000 audit: BPF prog-id=64 op=UNLOAD Jan 13 23:40:18.907503 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 23:40:18.907690 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 23:40:18.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:40:18.909020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:40:18.909125 systemd[1]: kubelet.service: Consumed 223ms CPU time, 95.4M memory peak. Jan 13 23:40:18.912342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:40:19.590955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
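The burst of `audit: BPF prog-id=... op=LOAD/UNLOAD` records above is systemd re-attaching its cgroup BPF programs during the daemon reload; pairing LOADs with UNLOADs is a quick sanity check that the old generation was detached. A rough sketch (plain Python over journal text, assuming the record format shown above):

```python
import re
from collections import Counter

BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def summarize_bpf_events(journal_text: str) -> Counter:
    """Count BPF program LOAD/UNLOAD audit events in a chunk of journal output."""
    return Counter(op for _, op in BPF_RE.findall(journal_text))

# Two records copied from the reload window above; over the full window the
# LOAD and UNLOAD counts come out roughly matched.
sample = (
    "Jan 13 23:40:18.842000 audit: BPF prog-id=69 op=LOAD "
    "Jan 13 23:40:18.842000 audit: BPF prog-id=46 op=UNLOAD"
)
print(summarize_bpf_events(sample))  # Counter({'LOAD': 1, 'UNLOAD': 1})
```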
Jan 13 23:40:19.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:19.606414 (kubelet)[2853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:40:19.679928 kubelet[2853]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 13 23:40:19.680374 kubelet[2853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:40:19.680436 kubelet[2853]: I0113 23:40:19.680381 2853 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:40:21.076925 kubelet[2853]: I0113 23:40:21.076669 2853 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 13 23:40:21.079021 kubelet[2853]: I0113 23:40:21.078976 2853 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:40:21.081938 kubelet[2853]: I0113 23:40:21.081530 2853 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 13 23:40:21.081938 kubelet[2853]: I0113 23:40:21.081571 2853 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 13 23:40:21.083392 kubelet[2853]: I0113 23:40:21.083325 2853 server.go:956] "Client rotation is on, will bootstrap in background" Jan 13 23:40:21.096477 kubelet[2853]: E0113 23:40:21.096407 2853 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.127:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 13 23:40:21.098288 kubelet[2853]: I0113 23:40:21.098234 2853 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:40:21.110942 kubelet[2853]: I0113 23:40:21.109828 2853 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:40:21.116754 kubelet[2853]: I0113 23:40:21.116708 2853 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 13 23:40:21.117254 kubelet[2853]: I0113 23:40:21.117200 2853 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:40:21.117499 kubelet[2853]: I0113 23:40:21.117254 2853 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-127","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:40:21.117673 kubelet[2853]: I0113 23:40:21.117501 2853 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:40:21.117673 kubelet[2853]: I0113 23:40:21.117519 2853 container_manager_linux.go:306] "Creating device plugin manager" Jan 13 23:40:21.117794 kubelet[2853]: I0113 23:40:21.117690 2853 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 13 23:40:21.124264 kubelet[2853]: I0113 23:40:21.124219 2853 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:40:21.127942 kubelet[2853]: I0113 23:40:21.126606 2853 kubelet.go:475] "Attempting to sync node with API server" Jan 13 23:40:21.127942 kubelet[2853]: I0113 23:40:21.126662 2853 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:40:21.127942 kubelet[2853]: I0113 23:40:21.126714 2853 kubelet.go:387] "Adding apiserver pod source" Jan 13 23:40:21.127942 kubelet[2853]: I0113 23:40:21.126745 2853 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:40:21.128855 kubelet[2853]: E0113 23:40:21.128797 2853 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-127&limit=500&resourceVersion=0\": dial tcp 172.31.24.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 13 23:40:21.129092 kubelet[2853]: E0113 23:40:21.129009 2853 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 172.31.24.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 13 23:40:21.130251 kubelet[2853]: I0113 23:40:21.130205 2853 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:40:21.131372 kubelet[2853]: I0113 23:40:21.131325 2853 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 13 23:40:21.131455 kubelet[2853]: I0113 23:40:21.131388 2853 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 13 23:40:21.131455 kubelet[2853]: W0113 23:40:21.131441 2853 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 23:40:21.136764 kubelet[2853]: I0113 23:40:21.136719 2853 server.go:1262] "Started kubelet" Jan 13 23:40:21.141021 kubelet[2853]: I0113 23:40:21.140971 2853 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 23:40:21.143291 kubelet[2853]: I0113 23:40:21.143247 2853 server.go:310] "Adding debug handlers to kubelet server" Jan 13 23:40:21.158459 kubelet[2853]: I0113 23:40:21.158366 2853 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 23:40:21.158622 kubelet[2853]: I0113 23:40:21.158483 2853 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 13 23:40:21.160939 kubelet[2853]: I0113 23:40:21.158879 2853 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:40:21.160939 kubelet[2853]: E0113 23:40:21.157440 2853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.127:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.127:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-127.188a6ecad5423d5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-127,UID:ip-172-31-24-127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-127,},FirstTimestamp:2026-01-13 23:40:21.136678234 +0000 UTC m=+1.524522973,LastTimestamp:2026-01-13 23:40:21.136678234 +0000 UTC m=+1.524522973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-127,}" Jan 13 23:40:21.160939 kubelet[2853]: I0113 23:40:21.160571 2853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:40:21.163152 kubelet[2853]: I0113 23:40:21.163114 2853 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 23:40:21.166714 kubelet[2853]: I0113 23:40:21.166688 2853 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 13 23:40:21.167121 kubelet[2853]: E0113 23:40:21.167093 2853 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-127\" not found" Jan 13 23:40:21.168208 kubelet[2853]: I0113 23:40:21.168177 2853 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 23:40:21.168592 kubelet[2853]: 
I0113 23:40:21.168570 2853 reconciler.go:29] "Reconciler: start to sync state" Jan 13 23:40:21.171890 kubelet[2853]: E0113 23:40:21.171843 2853 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 13 23:40:21.172276 kubelet[2853]: E0113 23:40:21.172230 2853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-127?timeout=10s\": dial tcp 172.31.24.127:6443: connect: connection refused" interval="200ms" Jan 13 23:40:21.174987 kubelet[2853]: I0113 23:40:21.174947 2853 factory.go:223] Registration of the systemd container factory successfully Jan 13 23:40:21.175342 kubelet[2853]: I0113 23:40:21.175308 2853 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:40:21.177374 kubelet[2853]: I0113 23:40:21.177339 2853 factory.go:223] Registration of the containerd container factory successfully Jan 13 23:40:21.180000 audit[2868]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.182746 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 13 23:40:21.182842 kernel: audit: type=1325 audit(1768347621.180:346): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.180000 audit[2868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdfbc26c0 a2=0 a3=0 items=0 ppid=2853 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.192679 kernel: audit: type=1300 audit(1768347621.180:346): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdfbc26c0 a2=0 a3=0 items=0 ppid=2853 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.180000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:40:21.195759 kernel: audit: type=1327 audit(1768347621.180:346): proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:40:21.198000 audit[2871]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.198000 audit[2871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc76aeed0 a2=0 a3=0 items=0 ppid=2853 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.209852 kernel: audit: type=1325 audit(1768347621.198:347): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.209977 kernel: audit: 
type=1300 audit(1768347621.198:347): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc76aeed0 a2=0 a3=0 items=0 ppid=2853 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:40:21.213138 kernel: audit: type=1327 audit(1768347621.198:347): proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:40:21.209000 audit[2874]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.216164 kubelet[2853]: I0113 23:40:21.216127 2853 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:40:21.216348 kubelet[2853]: I0113 23:40:21.216323 2853 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:40:21.216482 kubelet[2853]: I0113 23:40:21.216462 2853 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:40:21.216894 kernel: audit: type=1325 audit(1768347621.209:348): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.209000 audit[2874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc9254050 a2=0 a3=0 items=0 ppid=2853 pid=2874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.220927 kubelet[2853]: I0113 23:40:21.220874 2853 policy_none.go:49] "None policy: Start" Jan 13 23:40:21.221466 kubelet[2853]: I0113 23:40:21.221070 2853 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 13 23:40:21.221466 kubelet[2853]: I0113 23:40:21.221103 2853 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 13 23:40:21.223355 kernel: audit: type=1300 audit(1768347621.209:348): arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc9254050 a2=0 a3=0 items=0 ppid=2853 pid=2874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.223481 kernel: audit: type=1327 audit(1768347621.209:348): proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:40:21.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:40:21.226064 kubelet[2853]: I0113 23:40:21.226030 2853 policy_none.go:47] "Start" Jan 13 23:40:21.222000 audit[2876]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.229750 kernel: audit: type=1325 audit(1768347621.222:349): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.222000 audit[2876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdc5ea540 a2=0 a3=0 items=0 ppid=2853 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:40:21.240120 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 23:40:21.247000 audit[2879]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.247000 audit[2879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff5ac93c0 a2=0 a3=0 items=0 ppid=2853 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 13 23:40:21.249044 kubelet[2853]: I0113 23:40:21.249001 2853 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 13 23:40:21.250000 audit[2883]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:21.250000 audit[2883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe34a24a0 a2=0 a3=0 items=0 ppid=2853 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.250000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:40:21.252000 audit[2882]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.252000 audit[2882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd00354d0 a2=0 a3=0 items=0 ppid=2853 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.255206 kubelet[2853]: I0113 23:40:21.255024 2853 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 13 23:40:21.255206 kubelet[2853]: I0113 23:40:21.255093 2853 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 13 23:40:21.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 13 23:40:21.256891 kubelet[2853]: I0113 23:40:21.256847 2853 kubelet.go:2427] "Starting kubelet main sync loop" Jan 13 23:40:21.257019 kubelet[2853]: E0113 23:40:21.256969 2853 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:40:21.257771 kubelet[2853]: E0113 23:40:21.257715 2853 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 13 23:40:21.258000 audit[2884]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:21.258000 audit[2884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff1c78ce0 a2=0 a3=0 items=0 ppid=2853 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.258000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 13 23:40:21.261000 audit[2885]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.261000 audit[2885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff87e2660 a2=0 a3=0 items=0 ppid=2853 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 13 23:40:21.263339 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 13 23:40:21.265000 audit[2887]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:21.265000 audit[2887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeaa27460 a2=0 a3=0 items=0 ppid=2853 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.265000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 13 23:40:21.267631 kubelet[2853]: E0113 23:40:21.267327 2853 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-127\" not found" Jan 13 23:40:21.268000 audit[2886]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:21.268000 audit[2886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcdd34ce0 a2=0 a3=0 items=0 ppid=2853 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 13 23:40:21.272000 audit[2888]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:21.272000 audit[2888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1e9be10 a2=0 a3=0 items=0 ppid=2853 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:21.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 13 23:40:21.274716 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 23:40:21.288136 kubelet[2853]: E0113 23:40:21.288074 2853 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 13 23:40:21.288405 kubelet[2853]: I0113 23:40:21.288375 2853 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:40:21.288496 kubelet[2853]: I0113 23:40:21.288407 2853 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:40:21.290925 kubelet[2853]: E0113 23:40:21.290706 2853 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 13 23:40:21.290925 kubelet[2853]: E0113 23:40:21.290780 2853 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-127\" not found" Jan 13 23:40:21.291683 kubelet[2853]: I0113 23:40:21.290880 2853 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:40:21.370406 kubelet[2853]: I0113 23:40:21.370064 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:21.370406 kubelet[2853]: I0113 23:40:21.370126 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:21.370406 kubelet[2853]: I0113 23:40:21.370167 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:21.370406 kubelet[2853]: I0113 23:40:21.370205 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e21d41201db77406a57da4ece2aeb60-ca-certs\") pod \"kube-apiserver-ip-172-31-24-127\" (UID: \"2e21d41201db77406a57da4ece2aeb60\") " pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:21.370406 kubelet[2853]: I0113 23:40:21.370252 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e21d41201db77406a57da4ece2aeb60-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-127\" (UID: \"2e21d41201db77406a57da4ece2aeb60\") " pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:21.370734 kubelet[2853]: I0113 23:40:21.370297 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e21d41201db77406a57da4ece2aeb60-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-127\" (UID: \"2e21d41201db77406a57da4ece2aeb60\") " pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:21.370734 kubelet[2853]: I0113 23:40:21.370348 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:21.370734 kubelet[2853]: I0113 23:40:21.370390 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:21.373109 kubelet[2853]: E0113 23:40:21.372869 2853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-127?timeout=10s\": dial tcp 172.31.24.127:6443: connect: connection refused" interval="400ms" Jan 13 23:40:21.383334 systemd[1]: Created slice kubepods-burstable-pod2e21d41201db77406a57da4ece2aeb60.slice - libcontainer container kubepods-burstable-pod2e21d41201db77406a57da4ece2aeb60.slice. Jan 13 23:40:21.392346 kubelet[2853]: I0113 23:40:21.392302 2853 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-127" Jan 13 23:40:21.393053 kubelet[2853]: E0113 23:40:21.393005 2853 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.127:6443/api/v1/nodes\": dial tcp 172.31.24.127:6443: connect: connection refused" node="ip-172-31-24-127" Jan 13 23:40:21.397923 kubelet[2853]: E0113 23:40:21.397439 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:21.404385 systemd[1]: Created slice kubepods-burstable-podded12cfe464594c79461c8d7570190f5.slice - libcontainer container kubepods-burstable-podded12cfe464594c79461c8d7570190f5.slice. Jan 13 23:40:21.420298 kubelet[2853]: E0113 23:40:21.420261 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:21.426055 systemd[1]: Created slice kubepods-burstable-pod4d4c03a3aef05d759679af1d157a1138.slice - libcontainer container kubepods-burstable-pod4d4c03a3aef05d759679af1d157a1138.slice. Jan 13 23:40:21.430604 kubelet[2853]: E0113 23:40:21.430240 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:21.471677 kubelet[2853]: I0113 23:40:21.471621 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d4c03a3aef05d759679af1d157a1138-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-127\" (UID: \"4d4c03a3aef05d759679af1d157a1138\") " pod="kube-system/kube-scheduler-ip-172-31-24-127" Jan 13 23:40:21.595541 kubelet[2853]: I0113 23:40:21.595477 2853 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-127" Jan 13 23:40:21.596237 kubelet[2853]: E0113 23:40:21.596176 2853 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.127:6443/api/v1/nodes\": dial tcp 172.31.24.127:6443: connect: connection refused" node="ip-172-31-24-127" Jan 13 23:40:21.701632 update_engine[1858]: I20260113 23:40:21.701414 1858 update_attempter.cc:509] Updating boot flags... 
Jan 13 23:40:21.705575 containerd[1894]: time="2026-01-13T23:40:21.705306939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-127,Uid:2e21d41201db77406a57da4ece2aeb60,Namespace:kube-system,Attempt:0,}" Jan 13 23:40:21.729258 containerd[1894]: time="2026-01-13T23:40:21.729206215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-127,Uid:ded12cfe464594c79461c8d7570190f5,Namespace:kube-system,Attempt:0,}" Jan 13 23:40:21.736542 containerd[1894]: time="2026-01-13T23:40:21.736328714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-127,Uid:4d4c03a3aef05d759679af1d157a1138,Namespace:kube-system,Attempt:0,}" Jan 13 23:40:21.777601 kubelet[2853]: E0113 23:40:21.776937 2853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-127?timeout=10s\": dial tcp 172.31.24.127:6443: connect: connection refused" interval="800ms" Jan 13 23:40:22.002707 kubelet[2853]: I0113 23:40:22.002650 2853 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-127" Jan 13 23:40:22.004867 kubelet[2853]: E0113 23:40:22.004788 2853 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.127:6443/api/v1/nodes\": dial tcp 172.31.24.127:6443: connect: connection refused" node="ip-172-31-24-127" Jan 13 23:40:22.208876 kubelet[2853]: E0113 23:40:22.208671 2853 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 13 23:40:22.323824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760892627.mount: Deactivated successfully. 
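The three RunPodSandbox messages above are CRI calls the kubelet makes to containerd over /run/containerd/containerd.sock for the apiserver, controller-manager and scheduler static pods. A read-only sketch of talking to the same interface with the generated CRI client (ListPodSandbox instead of RunPodSandbox, so nothing is created); the module paths are the upstream ones and the socket path is simply copied from this host's configuration:

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // CRI runtime endpoint used by the kubelet on this node.
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // List the pod sandboxes created by the RunPodSandbox calls above.
    client := runtimeapi.NewRuntimeServiceClient(conn)
    resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
    if err != nil {
        log.Fatal(err)
    }
    for _, sb := range resp.Items {
        fmt.Printf("%s  %s/%s  %s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
    }
}

In practice crictl pods answers the same question; the sketch only makes the underlying RPC visible.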
Jan 13 23:40:22.366926 containerd[1894]: time="2026-01-13T23:40:22.360484016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:40:22.370789 containerd[1894]: time="2026-01-13T23:40:22.370715289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 13 23:40:22.379931 containerd[1894]: time="2026-01-13T23:40:22.377688074Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:40:22.387268 containerd[1894]: time="2026-01-13T23:40:22.387212626Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:40:22.389675 kubelet[2853]: E0113 23:40:22.389612 2853 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 13 23:40:22.394592 containerd[1894]: time="2026-01-13T23:40:22.394537090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:40:22.404875 containerd[1894]: time="2026-01-13T23:40:22.404113652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 13 23:40:22.406934 containerd[1894]: time="2026-01-13T23:40:22.406449312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 13 23:40:22.412090 containerd[1894]: time="2026-01-13T23:40:22.412035811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:40:22.422930 containerd[1894]: time="2026-01-13T23:40:22.420823951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 709.387242ms" Jan 13 23:40:22.453892 containerd[1894]: time="2026-01-13T23:40:22.453818493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 717.376622ms" Jan 13 23:40:22.463471 containerd[1894]: time="2026-01-13T23:40:22.462153934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 720.652387ms" Jan 13 23:40:22.554511 kubelet[2853]: E0113 23:40:22.554438 2853 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-127&limit=500&resourceVersion=0\": dial tcp 172.31.24.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 13 23:40:22.582646 kubelet[2853]: E0113 23:40:22.580978 2853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-127?timeout=10s\": dial tcp 172.31.24.127:6443: connect: connection refused" interval="1.6s" Jan 13 23:40:22.584644 kubelet[2853]: E0113 23:40:22.584192 2853 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 13 23:40:22.593516 containerd[1894]: time="2026-01-13T23:40:22.593005612Z" level=info msg="connecting to shim 922e3e133bfdcad75245670644c22c5eec6d3e1c6c45a3da5625470cfb7b40c6" address="unix:///run/containerd/s/f9cf9a8aeadc1563fd82bd8bc07c59f0d781f0052d65a8c67ceacdf10a7d2f80" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:40:22.649051 containerd[1894]: time="2026-01-13T23:40:22.645465650Z" level=info msg="connecting to shim d3e6749d351c6f6f0c9f1c4f0066058c785a990b3c04530106740e56c6a178be" address="unix:///run/containerd/s/d95bd25bed52b97ad7289c44747af69b7eacf264172bebdbdcc4f085cb66f266" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:40:22.654616 containerd[1894]: time="2026-01-13T23:40:22.653796181Z" level=info msg="connecting to shim 47f2db06e8680888abd15ae880e7305bcc12dcf346791b22493d6def59e646a0" address="unix:///run/containerd/s/01396ed2c0f5fa05a475d30dd5edd9d8ffb06d3298da66d0385e5bd8a33274dd" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:40:22.792513 systemd[1]: Started cri-containerd-922e3e133bfdcad75245670644c22c5eec6d3e1c6c45a3da5625470cfb7b40c6.scope - libcontainer container 922e3e133bfdcad75245670644c22c5eec6d3e1c6c45a3da5625470cfb7b40c6. Jan 13 23:40:22.810419 kubelet[2853]: I0113 23:40:22.810367 2853 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-127" Jan 13 23:40:22.810924 kubelet[2853]: E0113 23:40:22.810856 2853 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.127:6443/api/v1/nodes\": dial tcp 172.31.24.127:6443: connect: connection refused" node="ip-172-31-24-127" Jan 13 23:40:22.817443 systemd[1]: Started cri-containerd-d3e6749d351c6f6f0c9f1c4f0066058c785a990b3c04530106740e56c6a178be.scope - libcontainer container d3e6749d351c6f6f0c9f1c4f0066058c785a990b3c04530106740e56c6a178be. Jan 13 23:40:22.862355 systemd[1]: Started cri-containerd-47f2db06e8680888abd15ae880e7305bcc12dcf346791b22493d6def59e646a0.scope - libcontainer container 47f2db06e8680888abd15ae880e7305bcc12dcf346791b22493d6def59e646a0. 
Jan 13 23:40:23.069000 audit: BPF prog-id=89 op=LOAD Jan 13 23:40:23.070000 audit: BPF prog-id=90 op=LOAD Jan 13 23:40:23.070000 audit[3156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3094 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932326533653133336266646361643735323435363730363434633232 Jan 13 23:40:23.071000 audit: BPF prog-id=90 op=UNLOAD Jan 13 23:40:23.071000 audit[3156]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3094 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932326533653133336266646361643735323435363730363434633232 Jan 13 23:40:23.071000 audit: BPF prog-id=91 op=LOAD Jan 13 23:40:23.071000 audit[3156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3094 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932326533653133336266646361643735323435363730363434633232 Jan 13 23:40:23.072000 audit: BPF prog-id=92 op=LOAD Jan 13 23:40:23.072000 audit[3156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3094 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932326533653133336266646361643735323435363730363434633232 Jan 13 23:40:23.072000 audit: BPF prog-id=92 op=UNLOAD Jan 13 23:40:23.072000 audit[3156]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3094 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932326533653133336266646361643735323435363730363434633232 Jan 13 23:40:23.072000 audit: BPF prog-id=91 op=UNLOAD Jan 13 23:40:23.072000 audit[3156]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3094 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932326533653133336266646361643735323435363730363434633232 Jan 13 23:40:23.073000 audit: BPF prog-id=93 op=LOAD Jan 13 23:40:23.073000 audit[3156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3094 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932326533653133336266646361643735323435363730363434633232 Jan 13 23:40:23.073000 audit: BPF prog-id=94 op=LOAD Jan 13 23:40:23.077000 audit: BPF prog-id=95 op=LOAD Jan 13 23:40:23.077000 audit[3184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3131 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.078000 audit: BPF prog-id=96 op=LOAD Jan 13 23:40:23.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433653637343964333531633666366630633966316334663030363630 Jan 13 23:40:23.080000 audit: BPF prog-id=95 op=UNLOAD Jan 13 23:40:23.080000 audit[3184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433653637343964333531633666366630633966316334663030363630 Jan 13 23:40:23.084000 audit: BPF prog-id=97 op=LOAD Jan 13 23:40:23.084000 audit[3212]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3134 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.084000 audit: BPF prog-id=98 op=LOAD Jan 13 23:40:23.084000 audit[3184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3131 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.084000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433653637343964333531633666366630633966316334663030363630 Jan 13 23:40:23.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663264623036653836383038383861626431356165383830653733 Jan 13 23:40:23.086000 audit: BPF prog-id=97 op=UNLOAD Jan 13 23:40:23.086000 audit[3212]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663264623036653836383038383861626431356165383830653733 Jan 13 23:40:23.086000 audit: BPF prog-id=99 op=LOAD Jan 13 23:40:23.086000 audit[3212]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3134 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663264623036653836383038383861626431356165383830653733 Jan 13 23:40:23.088000 audit: BPF prog-id=100 op=LOAD Jan 13 23:40:23.088000 audit[3184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3131 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433653637343964333531633666366630633966316334663030363630 Jan 13 23:40:23.088000 audit: BPF prog-id=100 op=UNLOAD Jan 13 23:40:23.088000 audit[3184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433653637343964333531633666366630633966316334663030363630 Jan 13 23:40:23.088000 audit: BPF prog-id=98 op=UNLOAD Jan 13 23:40:23.088000 audit[3184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433653637343964333531633666366630633966316334663030363630 Jan 13 23:40:23.088000 audit: BPF prog-id=101 op=LOAD Jan 13 23:40:23.088000 audit[3212]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3134 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663264623036653836383038383861626431356165383830653733 Jan 13 23:40:23.089000 audit: BPF prog-id=101 op=UNLOAD Jan 13 23:40:23.089000 audit[3212]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663264623036653836383038383861626431356165383830653733 Jan 13 23:40:23.089000 audit: BPF prog-id=99 op=UNLOAD Jan 13 23:40:23.089000 audit[3212]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.088000 audit: BPF prog-id=102 op=LOAD Jan 13 23:40:23.088000 audit[3184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3131 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433653637343964333531633666366630633966316334663030363630 Jan 13 23:40:23.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663264623036653836383038383861626431356165383830653733 Jan 13 23:40:23.089000 audit: BPF prog-id=103 op=LOAD Jan 13 23:40:23.089000 audit[3212]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3134 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.089000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437663264623036653836383038383861626431356165383830653733 Jan 13 23:40:23.210952 containerd[1894]: time="2026-01-13T23:40:23.208695052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-127,Uid:2e21d41201db77406a57da4ece2aeb60,Namespace:kube-system,Attempt:0,} returns sandbox id \"922e3e133bfdcad75245670644c22c5eec6d3e1c6c45a3da5625470cfb7b40c6\"" Jan 13 23:40:23.220482 containerd[1894]: time="2026-01-13T23:40:23.220349588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-127,Uid:4d4c03a3aef05d759679af1d157a1138,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3e6749d351c6f6f0c9f1c4f0066058c785a990b3c04530106740e56c6a178be\"" Jan 13 23:40:23.229010 containerd[1894]: time="2026-01-13T23:40:23.228857736Z" level=info msg="CreateContainer within sandbox \"922e3e133bfdcad75245670644c22c5eec6d3e1c6c45a3da5625470cfb7b40c6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 23:40:23.232509 containerd[1894]: time="2026-01-13T23:40:23.232452645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-127,Uid:ded12cfe464594c79461c8d7570190f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"47f2db06e8680888abd15ae880e7305bcc12dcf346791b22493d6def59e646a0\"" Jan 13 23:40:23.246970 containerd[1894]: time="2026-01-13T23:40:23.246860721Z" level=info msg="Container 8741c98af9b732aca9901f0af2dc3683e3499c8dd40515c6135856b4346cfd86: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:40:23.251093 containerd[1894]: time="2026-01-13T23:40:23.250918569Z" level=info msg="CreateContainer within sandbox \"d3e6749d351c6f6f0c9f1c4f0066058c785a990b3c04530106740e56c6a178be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 23:40:23.260285 containerd[1894]: time="2026-01-13T23:40:23.260227193Z" level=info msg="CreateContainer within sandbox \"47f2db06e8680888abd15ae880e7305bcc12dcf346791b22493d6def59e646a0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 23:40:23.265725 containerd[1894]: time="2026-01-13T23:40:23.265633039Z" level=info msg="CreateContainer within sandbox \"922e3e133bfdcad75245670644c22c5eec6d3e1c6c45a3da5625470cfb7b40c6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8741c98af9b732aca9901f0af2dc3683e3499c8dd40515c6135856b4346cfd86\"" Jan 13 23:40:23.268390 containerd[1894]: time="2026-01-13T23:40:23.267302701Z" level=info msg="StartContainer for \"8741c98af9b732aca9901f0af2dc3683e3499c8dd40515c6135856b4346cfd86\"" Jan 13 23:40:23.271781 kubelet[2853]: E0113 23:40:23.271741 2853 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.127:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 13 23:40:23.274459 containerd[1894]: time="2026-01-13T23:40:23.274273565Z" level=info msg="connecting to shim 8741c98af9b732aca9901f0af2dc3683e3499c8dd40515c6135856b4346cfd86" address="unix:///run/containerd/s/f9cf9a8aeadc1563fd82bd8bc07c59f0d781f0052d65a8c67ceacdf10a7d2f80" protocol=ttrpc version=3 Jan 13 23:40:23.286822 
containerd[1894]: time="2026-01-13T23:40:23.286744473Z" level=info msg="Container f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:40:23.299922 containerd[1894]: time="2026-01-13T23:40:23.297792371Z" level=info msg="Container dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:40:23.313067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881647938.mount: Deactivated successfully. Jan 13 23:40:23.324880 containerd[1894]: time="2026-01-13T23:40:23.324634089Z" level=info msg="CreateContainer within sandbox \"d3e6749d351c6f6f0c9f1c4f0066058c785a990b3c04530106740e56c6a178be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec\"" Jan 13 23:40:23.326626 containerd[1894]: time="2026-01-13T23:40:23.326559959Z" level=info msg="StartContainer for \"f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec\"" Jan 13 23:40:23.332283 containerd[1894]: time="2026-01-13T23:40:23.332205337Z" level=info msg="connecting to shim f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec" address="unix:///run/containerd/s/d95bd25bed52b97ad7289c44747af69b7eacf264172bebdbdcc4f085cb66f266" protocol=ttrpc version=3 Jan 13 23:40:23.347144 containerd[1894]: time="2026-01-13T23:40:23.347050528Z" level=info msg="CreateContainer within sandbox \"47f2db06e8680888abd15ae880e7305bcc12dcf346791b22493d6def59e646a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a\"" Jan 13 23:40:23.349695 containerd[1894]: time="2026-01-13T23:40:23.349591454Z" level=info msg="StartContainer for \"dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a\"" Jan 13 23:40:23.358104 containerd[1894]: time="2026-01-13T23:40:23.356561153Z" level=info msg="connecting to shim dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a" address="unix:///run/containerd/s/01396ed2c0f5fa05a475d30dd5edd9d8ffb06d3298da66d0385e5bd8a33274dd" protocol=ttrpc version=3 Jan 13 23:40:23.366274 systemd[1]: Started cri-containerd-8741c98af9b732aca9901f0af2dc3683e3499c8dd40515c6135856b4346cfd86.scope - libcontainer container 8741c98af9b732aca9901f0af2dc3683e3499c8dd40515c6135856b4346cfd86. 
Jan 13 23:40:23.417000 audit: BPF prog-id=104 op=LOAD Jan 13 23:40:23.419000 audit: BPF prog-id=105 op=LOAD Jan 13 23:40:23.419000 audit[3304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3094 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343163393861663962373332616361393930316630616632646333 Jan 13 23:40:23.419000 audit: BPF prog-id=105 op=UNLOAD Jan 13 23:40:23.419000 audit[3304]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3094 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343163393861663962373332616361393930316630616632646333 Jan 13 23:40:23.420000 audit: BPF prog-id=106 op=LOAD Jan 13 23:40:23.420000 audit[3304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3094 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343163393861663962373332616361393930316630616632646333 Jan 13 23:40:23.420000 audit: BPF prog-id=107 op=LOAD Jan 13 23:40:23.420000 audit[3304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3094 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343163393861663962373332616361393930316630616632646333 Jan 13 23:40:23.420000 audit: BPF prog-id=107 op=UNLOAD Jan 13 23:40:23.420000 audit[3304]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3094 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343163393861663962373332616361393930316630616632646333 Jan 13 23:40:23.420000 audit: BPF prog-id=106 op=UNLOAD Jan 13 23:40:23.420000 audit[3304]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3094 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343163393861663962373332616361393930316630616632646333 Jan 13 23:40:23.420000 audit: BPF prog-id=108 op=LOAD Jan 13 23:40:23.420000 audit[3304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3094 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343163393861663962373332616361393930316630616632646333 Jan 13 23:40:23.435241 systemd[1]: Started cri-containerd-dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a.scope - libcontainer container dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a. Jan 13 23:40:23.439740 systemd[1]: Started cri-containerd-f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec.scope - libcontainer container f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec. Jan 13 23:40:23.472000 audit: BPF prog-id=109 op=LOAD Jan 13 23:40:23.476000 audit: BPF prog-id=110 op=LOAD Jan 13 23:40:23.476000 audit[3316]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3131 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638353832373133373965303131663561623538306439343864353736 Jan 13 23:40:23.477000 audit: BPF prog-id=110 op=UNLOAD Jan 13 23:40:23.477000 audit[3316]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638353832373133373965303131663561623538306439343864353736 Jan 13 23:40:23.478000 audit: BPF prog-id=111 op=LOAD Jan 13 23:40:23.478000 audit[3316]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3131 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.478000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638353832373133373965303131663561623538306439343864353736 Jan 13 23:40:23.481000 audit: BPF prog-id=112 op=LOAD Jan 13 23:40:23.481000 audit[3316]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3131 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638353832373133373965303131663561623538306439343864353736 Jan 13 23:40:23.482000 audit: BPF prog-id=112 op=UNLOAD Jan 13 23:40:23.482000 audit[3316]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638353832373133373965303131663561623538306439343864353736 Jan 13 23:40:23.482000 audit: BPF prog-id=111 op=UNLOAD Jan 13 23:40:23.482000 audit[3316]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638353832373133373965303131663561623538306439343864353736 Jan 13 23:40:23.482000 audit: BPF prog-id=113 op=LOAD Jan 13 23:40:23.482000 audit[3316]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3131 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638353832373133373965303131663561623538306439343864353736 Jan 13 23:40:23.519000 audit: BPF prog-id=114 op=LOAD Jan 13 23:40:23.524389 containerd[1894]: time="2026-01-13T23:40:23.524138176Z" level=info msg="StartContainer for \"8741c98af9b732aca9901f0af2dc3683e3499c8dd40515c6135856b4346cfd86\" returns successfully" Jan 13 23:40:23.524000 audit: BPF prog-id=115 op=LOAD Jan 13 23:40:23.524000 audit[3317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3134 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623934653661623565343632376435353064373862646239643662 Jan 13 23:40:23.525000 audit: BPF prog-id=115 op=UNLOAD Jan 13 23:40:23.525000 audit[3317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623934653661623565343632376435353064373862646239643662 Jan 13 23:40:23.528000 audit: BPF prog-id=116 op=LOAD Jan 13 23:40:23.528000 audit[3317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3134 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623934653661623565343632376435353064373862646239643662 Jan 13 23:40:23.529000 audit: BPF prog-id=117 op=LOAD Jan 13 23:40:23.529000 audit[3317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3134 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623934653661623565343632376435353064373862646239643662 Jan 13 23:40:23.530000 audit: BPF prog-id=117 op=UNLOAD Jan 13 23:40:23.530000 audit[3317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623934653661623565343632376435353064373862646239643662 Jan 13 23:40:23.531000 audit: BPF prog-id=116 op=UNLOAD Jan 13 23:40:23.531000 audit[3317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.531000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623934653661623565343632376435353064373862646239643662 Jan 13 23:40:23.532000 audit: BPF prog-id=118 op=LOAD Jan 13 23:40:23.532000 audit[3317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3134 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:23.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623934653661623565343632376435353064373862646239643662 Jan 13 23:40:23.604758 containerd[1894]: time="2026-01-13T23:40:23.604390483Z" level=info msg="StartContainer for \"f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec\" returns successfully" Jan 13 23:40:23.673369 containerd[1894]: time="2026-01-13T23:40:23.673303567Z" level=info msg="StartContainer for \"dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a\" returns successfully" Jan 13 23:40:24.300403 kubelet[2853]: E0113 23:40:24.300341 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:24.307483 kubelet[2853]: E0113 23:40:24.307425 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:24.320928 kubelet[2853]: E0113 23:40:24.320862 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:24.418075 kubelet[2853]: I0113 23:40:24.415818 2853 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-127" Jan 13 23:40:25.320498 kubelet[2853]: E0113 23:40:25.320445 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:25.323948 kubelet[2853]: E0113 23:40:25.323321 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:25.323948 kubelet[2853]: E0113 23:40:25.323480 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:26.324256 kubelet[2853]: E0113 23:40:26.324107 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:26.326720 kubelet[2853]: E0113 23:40:26.326614 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:27.327505 kubelet[2853]: E0113 23:40:27.327452 2853 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-127\" not found" 
node="ip-172-31-24-127" Jan 13 23:40:28.052616 kubelet[2853]: E0113 23:40:28.052559 2853 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-127\" not found" node="ip-172-31-24-127" Jan 13 23:40:28.127215 kubelet[2853]: I0113 23:40:28.127157 2853 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-127" Jan 13 23:40:28.127215 kubelet[2853]: E0113 23:40:28.127218 2853 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-127\": node \"ip-172-31-24-127\" not found" Jan 13 23:40:28.131122 kubelet[2853]: I0113 23:40:28.131071 2853 apiserver.go:52] "Watching apiserver" Jan 13 23:40:28.168784 kubelet[2853]: I0113 23:40:28.168727 2853 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-127" Jan 13 23:40:28.269480 kubelet[2853]: I0113 23:40:28.269397 2853 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 23:40:28.282313 kubelet[2853]: E0113 23:40:28.282255 2853 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-127\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-127" Jan 13 23:40:28.282313 kubelet[2853]: I0113 23:40:28.282305 2853 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:28.300179 kubelet[2853]: E0113 23:40:28.298228 2853 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-127\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:28.300179 kubelet[2853]: I0113 23:40:28.299826 2853 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:28.315927 kubelet[2853]: E0113 23:40:28.315267 2853 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-127\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:29.203432 kubelet[2853]: I0113 23:40:29.203359 2853 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-127" Jan 13 23:40:30.721304 systemd[1]: Reload requested from client PID 3408 ('systemctl') (unit session-8.scope)... Jan 13 23:40:30.721336 systemd[1]: Reloading... Jan 13 23:40:30.990949 zram_generator::config[3463]: No configuration found. Jan 13 23:40:31.303830 kubelet[2853]: I0113 23:40:31.302775 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-127" podStartSLOduration=2.302618131 podStartE2EDuration="2.302618131s" podCreationTimestamp="2026-01-13 23:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:40:31.302057331 +0000 UTC m=+11.689902057" watchObservedRunningTime="2026-01-13 23:40:31.302618131 +0000 UTC m=+11.690462845" Jan 13 23:40:31.581292 systemd[1]: Reloading finished in 859 ms. Jan 13 23:40:31.635268 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:40:31.652658 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 13 23:40:31.653398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:40:31.659973 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 13 23:40:31.660057 kernel: audit: type=1131 audit(1768347631.652:406): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:31.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:31.653520 systemd[1]: kubelet.service: Consumed 2.464s CPU time, 123.6M memory peak. Jan 13 23:40:31.660411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:40:31.661000 audit: BPF prog-id=119 op=LOAD Jan 13 23:40:31.661000 audit: BPF prog-id=69 op=UNLOAD Jan 13 23:40:31.665997 kernel: audit: type=1334 audit(1768347631.661:407): prog-id=119 op=LOAD Jan 13 23:40:31.666132 kernel: audit: type=1334 audit(1768347631.661:408): prog-id=69 op=UNLOAD Jan 13 23:40:31.668000 audit: BPF prog-id=120 op=LOAD Jan 13 23:40:31.668000 audit: BPF prog-id=82 op=UNLOAD Jan 13 23:40:31.672932 kernel: audit: type=1334 audit(1768347631.668:409): prog-id=120 op=LOAD Jan 13 23:40:31.673059 kernel: audit: type=1334 audit(1768347631.668:410): prog-id=82 op=UNLOAD Jan 13 23:40:31.675000 audit: BPF prog-id=121 op=LOAD Jan 13 23:40:31.675000 audit: BPF prog-id=86 op=UNLOAD Jan 13 23:40:31.680340 kernel: audit: type=1334 audit(1768347631.675:411): prog-id=121 op=LOAD Jan 13 23:40:31.680498 kernel: audit: type=1334 audit(1768347631.675:412): prog-id=86 op=UNLOAD Jan 13 23:40:31.677000 audit: BPF prog-id=122 op=LOAD Jan 13 23:40:31.682302 kernel: audit: type=1334 audit(1768347631.677:413): prog-id=122 op=LOAD Jan 13 23:40:31.684005 kernel: audit: type=1334 audit(1768347631.677:414): prog-id=123 op=LOAD Jan 13 23:40:31.677000 audit: BPF prog-id=123 op=LOAD Jan 13 23:40:31.677000 audit: BPF prog-id=87 op=UNLOAD Jan 13 23:40:31.685933 kernel: audit: type=1334 audit(1768347631.677:415): prog-id=87 op=UNLOAD Jan 13 23:40:31.677000 audit: BPF prog-id=88 op=UNLOAD Jan 13 23:40:31.683000 audit: BPF prog-id=124 op=LOAD Jan 13 23:40:31.683000 audit: BPF prog-id=79 op=UNLOAD Jan 13 23:40:31.687000 audit: BPF prog-id=125 op=LOAD Jan 13 23:40:31.687000 audit: BPF prog-id=126 op=LOAD Jan 13 23:40:31.687000 audit: BPF prog-id=80 op=UNLOAD Jan 13 23:40:31.687000 audit: BPF prog-id=81 op=UNLOAD Jan 13 23:40:31.690000 audit: BPF prog-id=127 op=LOAD Jan 13 23:40:31.693000 audit: BPF prog-id=72 op=UNLOAD Jan 13 23:40:31.693000 audit: BPF prog-id=128 op=LOAD Jan 13 23:40:31.694000 audit: BPF prog-id=129 op=LOAD Jan 13 23:40:31.694000 audit: BPF prog-id=73 op=UNLOAD Jan 13 23:40:31.694000 audit: BPF prog-id=74 op=UNLOAD Jan 13 23:40:31.695000 audit: BPF prog-id=130 op=LOAD Jan 13 23:40:31.696000 audit: BPF prog-id=83 op=UNLOAD Jan 13 23:40:31.696000 audit: BPF prog-id=131 op=LOAD Jan 13 23:40:31.696000 audit: BPF prog-id=132 op=LOAD Jan 13 23:40:31.696000 audit: BPF prog-id=84 op=UNLOAD Jan 13 23:40:31.696000 audit: BPF prog-id=85 op=UNLOAD Jan 13 23:40:31.698000 audit: BPF prog-id=133 op=LOAD Jan 13 23:40:31.699000 audit: BPF prog-id=78 op=UNLOAD Jan 13 23:40:31.700000 audit: BPF prog-id=134 op=LOAD Jan 13 23:40:31.700000 audit: BPF prog-id=75 op=UNLOAD Jan 13 23:40:31.701000 audit: BPF prog-id=135 op=LOAD Jan 13 23:40:31.701000 audit: BPF 
prog-id=136 op=LOAD Jan 13 23:40:31.701000 audit: BPF prog-id=76 op=UNLOAD Jan 13 23:40:31.701000 audit: BPF prog-id=77 op=UNLOAD Jan 13 23:40:31.704000 audit: BPF prog-id=137 op=LOAD Jan 13 23:40:31.705000 audit: BPF prog-id=138 op=LOAD Jan 13 23:40:31.705000 audit: BPF prog-id=70 op=UNLOAD Jan 13 23:40:31.705000 audit: BPF prog-id=71 op=UNLOAD Jan 13 23:40:32.090890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:40:32.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:32.115769 (kubelet)[3516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:40:32.221956 kubelet[3516]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 13 23:40:32.221956 kubelet[3516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:40:32.221956 kubelet[3516]: I0113 23:40:32.220005 3516 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:40:32.242666 kubelet[3516]: I0113 23:40:32.242613 3516 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 13 23:40:32.242666 kubelet[3516]: I0113 23:40:32.242657 3516 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:40:32.242869 kubelet[3516]: I0113 23:40:32.242716 3516 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 13 23:40:32.242869 kubelet[3516]: I0113 23:40:32.242731 3516 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 13 23:40:32.244157 kubelet[3516]: I0113 23:40:32.243476 3516 server.go:956] "Client rotation is on, will bootstrap in background" Jan 13 23:40:32.246320 kubelet[3516]: I0113 23:40:32.246270 3516 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 13 23:40:32.250516 kubelet[3516]: I0113 23:40:32.250435 3516 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:40:32.259239 kubelet[3516]: I0113 23:40:32.259192 3516 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:40:32.265052 kubelet[3516]: I0113 23:40:32.264995 3516 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 13 23:40:32.265584 kubelet[3516]: I0113 23:40:32.265523 3516 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:40:32.265853 kubelet[3516]: I0113 23:40:32.265572 3516 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-127","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:40:32.265853 kubelet[3516]: I0113 23:40:32.265845 3516 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:40:32.266090 kubelet[3516]: I0113 23:40:32.265866 3516 container_manager_linux.go:306] "Creating device plugin manager" Jan 13 23:40:32.266090 kubelet[3516]: I0113 23:40:32.265925 3516 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 13 23:40:32.267973 kubelet[3516]: I0113 23:40:32.267923 3516 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:40:32.268221 kubelet[3516]: I0113 23:40:32.268179 3516 kubelet.go:475] "Attempting to sync node with API server" Jan 13 23:40:32.268221 kubelet[3516]: I0113 23:40:32.268214 3516 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:40:32.268340 kubelet[3516]: I0113 23:40:32.268252 3516 kubelet.go:387] "Adding apiserver pod source" Jan 13 23:40:32.268340 kubelet[3516]: I0113 23:40:32.268274 3516 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:40:32.274159 kubelet[3516]: I0113 23:40:32.274113 3516 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:40:32.276852 kubelet[3516]: I0113 23:40:32.276797 3516 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 13 23:40:32.276852 kubelet[3516]: I0113 23:40:32.276869 3516 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 13 23:40:32.300774 
kubelet[3516]: I0113 23:40:32.300456 3516 server.go:1262] "Started kubelet" Jan 13 23:40:32.305468 kubelet[3516]: I0113 23:40:32.305404 3516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:40:32.316507 kubelet[3516]: I0113 23:40:32.315096 3516 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 23:40:32.320494 kubelet[3516]: I0113 23:40:32.319639 3516 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 23:40:32.320494 kubelet[3516]: I0113 23:40:32.319974 3516 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 13 23:40:32.324955 kubelet[3516]: I0113 23:40:32.324868 3516 server.go:310] "Adding debug handlers to kubelet server" Jan 13 23:40:32.329785 kubelet[3516]: I0113 23:40:32.329657 3516 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:40:32.331689 kubelet[3516]: I0113 23:40:32.331125 3516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 23:40:32.336794 kubelet[3516]: I0113 23:40:32.335677 3516 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 13 23:40:32.343145 kubelet[3516]: I0113 23:40:32.342060 3516 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 23:40:32.343145 kubelet[3516]: I0113 23:40:32.342314 3516 reconciler.go:29] "Reconciler: start to sync state" Jan 13 23:40:32.365803 kubelet[3516]: I0113 23:40:32.365582 3516 factory.go:223] Registration of the systemd container factory successfully Jan 13 23:40:32.365803 kubelet[3516]: I0113 23:40:32.365754 3516 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:40:32.372439 kubelet[3516]: E0113 23:40:32.372345 3516 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 23:40:32.374956 kubelet[3516]: I0113 23:40:32.373271 3516 factory.go:223] Registration of the containerd container factory successfully Jan 13 23:40:32.389221 kubelet[3516]: I0113 23:40:32.389159 3516 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 13 23:40:32.392017 kubelet[3516]: I0113 23:40:32.391977 3516 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 13 23:40:32.392198 kubelet[3516]: I0113 23:40:32.392180 3516 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 13 23:40:32.392311 kubelet[3516]: I0113 23:40:32.392294 3516 kubelet.go:2427] "Starting kubelet main sync loop" Jan 13 23:40:32.392476 kubelet[3516]: E0113 23:40:32.392439 3516 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:40:32.492633 kubelet[3516]: E0113 23:40:32.492593 3516 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 23:40:32.493967 kubelet[3516]: I0113 23:40:32.493937 3516 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:40:32.494400 kubelet[3516]: I0113 23:40:32.494309 3516 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:40:32.494690 kubelet[3516]: I0113 23:40:32.494599 3516 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:40:32.495478 kubelet[3516]: I0113 23:40:32.495353 3516 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 23:40:32.495478 kubelet[3516]: I0113 23:40:32.495415 3516 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 23:40:32.496398 kubelet[3516]: I0113 23:40:32.495692 3516 policy_none.go:49] "None policy: Start" Jan 13 23:40:32.496398 kubelet[3516]: I0113 23:40:32.495893 3516 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 13 23:40:32.496398 kubelet[3516]: I0113 23:40:32.495955 3516 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 13 23:40:32.497926 kubelet[3516]: I0113 23:40:32.497232 3516 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 13 23:40:32.497926 kubelet[3516]: I0113 23:40:32.497268 3516 policy_none.go:47] "Start" Jan 13 23:40:32.506459 kubelet[3516]: E0113 23:40:32.506413 3516 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 13 23:40:32.506755 kubelet[3516]: I0113 23:40:32.506719 3516 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:40:32.506833 kubelet[3516]: I0113 23:40:32.506750 3516 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:40:32.508799 kubelet[3516]: I0113 23:40:32.508533 3516 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:40:32.509657 kubelet[3516]: E0113 23:40:32.509407 3516 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 13 23:40:32.618443 kubelet[3516]: I0113 23:40:32.618272 3516 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-127" Jan 13 23:40:32.635985 kubelet[3516]: I0113 23:40:32.634548 3516 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-127" Jan 13 23:40:32.637493 kubelet[3516]: I0113 23:40:32.637428 3516 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-127" Jan 13 23:40:32.694319 kubelet[3516]: I0113 23:40:32.694276 3516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:32.696285 kubelet[3516]: I0113 23:40:32.696229 3516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-127" Jan 13 23:40:32.697932 kubelet[3516]: I0113 23:40:32.696768 3516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:32.707879 kubelet[3516]: E0113 23:40:32.707817 3516 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-127\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-127" Jan 13 23:40:32.744391 kubelet[3516]: I0113 23:40:32.744313 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:32.744525 kubelet[3516]: I0113 23:40:32.744403 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e21d41201db77406a57da4ece2aeb60-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-127\" (UID: \"2e21d41201db77406a57da4ece2aeb60\") " pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:32.744525 kubelet[3516]: I0113 23:40:32.744471 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:32.744653 kubelet[3516]: I0113 23:40:32.744557 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d4c03a3aef05d759679af1d157a1138-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-127\" (UID: \"4d4c03a3aef05d759679af1d157a1138\") " pod="kube-system/kube-scheduler-ip-172-31-24-127" Jan 13 23:40:32.744653 kubelet[3516]: I0113 23:40:32.744619 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e21d41201db77406a57da4ece2aeb60-ca-certs\") pod \"kube-apiserver-ip-172-31-24-127\" (UID: \"2e21d41201db77406a57da4ece2aeb60\") " pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:32.744756 kubelet[3516]: I0113 23:40:32.744659 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2e21d41201db77406a57da4ece2aeb60-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-127\" (UID: \"2e21d41201db77406a57da4ece2aeb60\") " pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:32.744756 kubelet[3516]: I0113 23:40:32.744720 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:32.745692 kubelet[3516]: I0113 23:40:32.744792 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:32.745692 kubelet[3516]: I0113 23:40:32.745028 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ded12cfe464594c79461c8d7570190f5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-127\" (UID: \"ded12cfe464594c79461c8d7570190f5\") " pod="kube-system/kube-controller-manager-ip-172-31-24-127" Jan 13 23:40:33.269773 kubelet[3516]: I0113 23:40:33.269706 3516 apiserver.go:52] "Watching apiserver" Jan 13 23:40:33.342790 kubelet[3516]: I0113 23:40:33.342735 3516 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 23:40:33.444506 kubelet[3516]: I0113 23:40:33.444162 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-127" podStartSLOduration=1.4441395670000001 podStartE2EDuration="1.444139567s" podCreationTimestamp="2026-01-13 23:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:40:33.413682014 +0000 UTC m=+1.289708386" watchObservedRunningTime="2026-01-13 23:40:33.444139567 +0000 UTC m=+1.320165939" Jan 13 23:40:33.453352 kubelet[3516]: I0113 23:40:33.453298 3516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:33.513031 kubelet[3516]: E0113 23:40:33.512946 3516 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-127\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-127" Jan 13 23:40:33.542793 kubelet[3516]: I0113 23:40:33.542611 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-127" podStartSLOduration=1.54258932 podStartE2EDuration="1.54258932s" podCreationTimestamp="2026-01-13 23:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:40:33.504266648 +0000 UTC m=+1.380293008" watchObservedRunningTime="2026-01-13 23:40:33.54258932 +0000 UTC m=+1.418615680" Jan 13 23:40:36.027783 kubelet[3516]: I0113 23:40:36.027700 3516 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 23:40:36.029654 containerd[1894]: time="2026-01-13T23:40:36.029557200Z" level=info msg="No cni config template is 
specified, wait for other system components to drop the config." Jan 13 23:40:36.030876 kubelet[3516]: I0113 23:40:36.030533 3516 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 23:40:37.101731 systemd[1]: Created slice kubepods-besteffort-pod919226ad_61ee_4904_8ce7_4a9b17303e4f.slice - libcontainer container kubepods-besteffort-pod919226ad_61ee_4904_8ce7_4a9b17303e4f.slice. Jan 13 23:40:37.177104 kubelet[3516]: I0113 23:40:37.177040 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/919226ad-61ee-4904-8ce7-4a9b17303e4f-kube-proxy\") pod \"kube-proxy-bf9tw\" (UID: \"919226ad-61ee-4904-8ce7-4a9b17303e4f\") " pod="kube-system/kube-proxy-bf9tw" Jan 13 23:40:37.178454 kubelet[3516]: I0113 23:40:37.177129 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7zjz\" (UniqueName: \"kubernetes.io/projected/919226ad-61ee-4904-8ce7-4a9b17303e4f-kube-api-access-k7zjz\") pod \"kube-proxy-bf9tw\" (UID: \"919226ad-61ee-4904-8ce7-4a9b17303e4f\") " pod="kube-system/kube-proxy-bf9tw" Jan 13 23:40:37.178454 kubelet[3516]: I0113 23:40:37.177177 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/919226ad-61ee-4904-8ce7-4a9b17303e4f-xtables-lock\") pod \"kube-proxy-bf9tw\" (UID: \"919226ad-61ee-4904-8ce7-4a9b17303e4f\") " pod="kube-system/kube-proxy-bf9tw" Jan 13 23:40:37.178454 kubelet[3516]: I0113 23:40:37.177241 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/919226ad-61ee-4904-8ce7-4a9b17303e4f-lib-modules\") pod \"kube-proxy-bf9tw\" (UID: \"919226ad-61ee-4904-8ce7-4a9b17303e4f\") " pod="kube-system/kube-proxy-bf9tw" Jan 13 23:40:37.244823 systemd[1]: Created slice kubepods-besteffort-pod5487f077_2016_4030_8296_e4348b68c177.slice - libcontainer container kubepods-besteffort-pod5487f077_2016_4030_8296_e4348b68c177.slice. 
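The two "Created slice" entries above show how the kubelet's systemd cgroup driver derives a pod's slice unit from its QoS class and UID: dashes in the UID become underscores (a dash is a hierarchy separator in slice names) and the result is embedded in kubepods-besteffort-pod<uid>.slice. A minimal sketch of that mapping, checked only against the two besteffort pods in this log; the handling of other QoS classes is an assumption:

# Sketch: reproduce the pod slice names from the "Created slice" entries above.
# Only the besteffort case is confirmed by this log; the guaranteed/burstable
# naming below is an assumption.
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    uid = pod_uid.replace("-", "_")  # "-" separates slice hierarchy levels, so UID dashes are rewritten
    infix = "" if qos_class == "guaranteed" else f"{qos_class}-"
    return f"kubepods-{infix}pod{uid}.slice"

print(pod_slice_name("919226ad-61ee-4904-8ce7-4a9b17303e4f"))
# -> kubepods-besteffort-pod919226ad_61ee_4904_8ce7_4a9b17303e4f.slice (kube-proxy-bf9tw)
print(pod_slice_name("5487f077-2016-4030-8296-e4348b68c177"))
# -> kubepods-besteffort-pod5487f077_2016_4030_8296_e4348b68c177.slice (tigera-operator-65cdcdfd6d-9nnqw)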
Jan 13 23:40:37.278470 kubelet[3516]: I0113 23:40:37.278393 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5487f077-2016-4030-8296-e4348b68c177-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-9nnqw\" (UID: \"5487f077-2016-4030-8296-e4348b68c177\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9nnqw" Jan 13 23:40:37.278633 kubelet[3516]: I0113 23:40:37.278552 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j26ph\" (UniqueName: \"kubernetes.io/projected/5487f077-2016-4030-8296-e4348b68c177-kube-api-access-j26ph\") pod \"tigera-operator-65cdcdfd6d-9nnqw\" (UID: \"5487f077-2016-4030-8296-e4348b68c177\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9nnqw" Jan 13 23:40:37.420395 containerd[1894]: time="2026-01-13T23:40:37.420198311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bf9tw,Uid:919226ad-61ee-4904-8ce7-4a9b17303e4f,Namespace:kube-system,Attempt:0,}" Jan 13 23:40:37.452288 containerd[1894]: time="2026-01-13T23:40:37.452229971Z" level=info msg="connecting to shim 197a838694660c4daf5986c655889cad30658e7b678d9c307ced9fd06bc2d884" address="unix:///run/containerd/s/542d9938b0f5dc5f1cf41d80283fb30dcd723795c6756f50ad8d5353dc874bbe" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:40:37.501299 systemd[1]: Started cri-containerd-197a838694660c4daf5986c655889cad30658e7b678d9c307ced9fd06bc2d884.scope - libcontainer container 197a838694660c4daf5986c655889cad30658e7b678d9c307ced9fd06bc2d884. Jan 13 23:40:37.530000 audit: BPF prog-id=139 op=LOAD Jan 13 23:40:37.532194 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 13 23:40:37.532272 kernel: audit: type=1334 audit(1768347637.530:448): prog-id=139 op=LOAD Jan 13 23:40:37.533000 audit: BPF prog-id=140 op=LOAD Jan 13 23:40:37.542398 kernel: audit: type=1334 audit(1768347637.533:449): prog-id=140 op=LOAD Jan 13 23:40:37.542595 kernel: audit: type=1300 audit(1768347637.533:449): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.542650 kernel: audit: type=1327 audit(1768347637.533:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.533000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.533000 audit: BPF prog-id=140 op=UNLOAD Jan 13 23:40:37.549992 kernel: audit: type=1334 audit(1768347637.533:450): prog-id=140 op=UNLOAD Jan 13 23:40:37.550115 kernel: audit: type=1300 audit(1768347637.533:450): 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.533000 audit[3586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.562532 kernel: audit: type=1327 audit(1768347637.533:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.562654 kernel: audit: type=1334 audit(1768347637.533:451): prog-id=141 op=LOAD Jan 13 23:40:37.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.533000 audit: BPF prog-id=141 op=LOAD Jan 13 23:40:37.567439 kernel: audit: type=1300 audit(1768347637.533:451): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.533000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.535000 audit: BPF prog-id=142 op=LOAD Jan 13 23:40:37.535000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.547000 audit: BPF prog-id=142 op=UNLOAD Jan 13 23:40:37.547000 audit[3586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.547000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.547000 audit: BPF prog-id=141 op=UNLOAD Jan 13 23:40:37.547000 audit[3586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.547000 audit: BPF prog-id=143 op=LOAD Jan 13 23:40:37.547000 audit[3586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3575 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.579942 kernel: audit: type=1327 audit(1768347637.533:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139376138333836393436363063346461663539383663363535383839 Jan 13 23:40:37.580025 containerd[1894]: time="2026-01-13T23:40:37.578863137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9nnqw,Uid:5487f077-2016-4030-8296-e4348b68c177,Namespace:tigera-operator,Attempt:0,}" Jan 13 23:40:37.647029 containerd[1894]: time="2026-01-13T23:40:37.646787466Z" level=info msg="connecting to shim bc7fd7e37b387e2d45d26f51cbb4118434a0c7d021446dd300ca7f6fb4bb93f1" address="unix:///run/containerd/s/04cae5463c0309a3f4188c39042ee08cf5bf6db9ce7b9f3b51ebb568eca402ea" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:40:37.652599 containerd[1894]: time="2026-01-13T23:40:37.652505552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bf9tw,Uid:919226ad-61ee-4904-8ce7-4a9b17303e4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"197a838694660c4daf5986c655889cad30658e7b678d9c307ced9fd06bc2d884\"" Jan 13 23:40:37.663716 containerd[1894]: time="2026-01-13T23:40:37.663374044Z" level=info msg="CreateContainer within sandbox \"197a838694660c4daf5986c655889cad30658e7b678d9c307ced9fd06bc2d884\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 23:40:37.686076 containerd[1894]: time="2026-01-13T23:40:37.684801632Z" level=info msg="Container effeb44799ee488c51fc0fe4c95b0596a38e5ff1061180177dfc7d5c27268642: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:40:37.705269 systemd[1]: Started cri-containerd-bc7fd7e37b387e2d45d26f51cbb4118434a0c7d021446dd300ca7f6fb4bb93f1.scope - libcontainer container bc7fd7e37b387e2d45d26f51cbb4118434a0c7d021446dd300ca7f6fb4bb93f1. 
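The containerd and systemd entries above can be tied together by the 64-character hex id: RunPodSandbox returns it, the shim is reached over a socket under /run/containerd/s/, and the workload runs inside a cri-containerd-<id>.scope unit. A rough correlation sketch; the regexes are assumptions fitted to the lines in this journal, not a stable containerd/systemd format:

import re

# Pair "returns sandbox id" entries with the cri-containerd-<id>.scope units systemd starts.
# Patterns are fitted to this journal's formatting (quotes appear backslash-escaped here).
SANDBOX_RE = re.compile(r'returns sandbox id \\?"([0-9a-f]{64})\\?"')
SCOPE_RE = re.compile(r'Started cri-containerd-([0-9a-f]{64})\.scope')

def classify_scopes(log_text: str) -> dict:
    sandbox_ids = set(SANDBOX_RE.findall(log_text))
    return {cid: ("sandbox" if cid in sandbox_ids else "container")
            for cid in SCOPE_RE.findall(log_text)}

# Run against this log, 197a8386... and bc7fd7e3... classify as sandboxes
# (kube-proxy-bf9tw and tigera-operator-65cdcdfd6d-9nnqw), while effeb447...
# is the kube-proxy container started inside the first one.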
Jan 13 23:40:37.707894 containerd[1894]: time="2026-01-13T23:40:37.707814925Z" level=info msg="CreateContainer within sandbox \"197a838694660c4daf5986c655889cad30658e7b678d9c307ced9fd06bc2d884\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"effeb44799ee488c51fc0fe4c95b0596a38e5ff1061180177dfc7d5c27268642\"" Jan 13 23:40:37.708746 containerd[1894]: time="2026-01-13T23:40:37.708703033Z" level=info msg="StartContainer for \"effeb44799ee488c51fc0fe4c95b0596a38e5ff1061180177dfc7d5c27268642\"" Jan 13 23:40:37.716830 containerd[1894]: time="2026-01-13T23:40:37.716697096Z" level=info msg="connecting to shim effeb44799ee488c51fc0fe4c95b0596a38e5ff1061180177dfc7d5c27268642" address="unix:///run/containerd/s/542d9938b0f5dc5f1cf41d80283fb30dcd723795c6756f50ad8d5353dc874bbe" protocol=ttrpc version=3 Jan 13 23:40:37.742000 audit: BPF prog-id=144 op=LOAD Jan 13 23:40:37.745000 audit: BPF prog-id=145 op=LOAD Jan 13 23:40:37.745000 audit[3632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3621 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376664376533376233383765326434356432366635316362623431 Jan 13 23:40:37.745000 audit: BPF prog-id=145 op=UNLOAD Jan 13 23:40:37.745000 audit[3632]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3621 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376664376533376233383765326434356432366635316362623431 Jan 13 23:40:37.746000 audit: BPF prog-id=146 op=LOAD Jan 13 23:40:37.746000 audit[3632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3621 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376664376533376233383765326434356432366635316362623431 Jan 13 23:40:37.746000 audit: BPF prog-id=147 op=LOAD Jan 13 23:40:37.746000 audit[3632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3621 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.746000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376664376533376233383765326434356432366635316362623431 Jan 13 23:40:37.747000 audit: BPF prog-id=147 op=UNLOAD Jan 13 23:40:37.747000 audit[3632]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3621 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376664376533376233383765326434356432366635316362623431 Jan 13 23:40:37.747000 audit: BPF prog-id=146 op=UNLOAD Jan 13 23:40:37.747000 audit[3632]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3621 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376664376533376233383765326434356432366635316362623431 Jan 13 23:40:37.747000 audit: BPF prog-id=148 op=LOAD Jan 13 23:40:37.747000 audit[3632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3621 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376664376533376233383765326434356432366635316362623431 Jan 13 23:40:37.768257 systemd[1]: Started cri-containerd-effeb44799ee488c51fc0fe4c95b0596a38e5ff1061180177dfc7d5c27268642.scope - libcontainer container effeb44799ee488c51fc0fe4c95b0596a38e5ff1061180177dfc7d5c27268642. 
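The audit PROCTITLE values in the records above (runc) and in the NETFILTER_CFG records that follow (iptables/ip6tables invoked by kube-proxy) are the audited command line, hex-encoded with NUL-separated arguments. A small decoder, assuming only that framing:

def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE field: hex-encoded argv with NUL separators."""
    return bytes.fromhex(hex_value).decode("utf-8", "replace").split("\x00")

# One of the NETFILTER_CFG records below decodes to a plain iptables invocation:
print(decode_proctitle(
    "69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
))
# -> ['iptables', '-w', '5', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']
# The runc PROCTITLEs above decode the same way but appear length-capped (128 bytes),
# so the trailing task path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/
# comes out truncated relative to the full container id elsewhere in the log.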
Jan 13 23:40:37.822530 containerd[1894]: time="2026-01-13T23:40:37.822474219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9nnqw,Uid:5487f077-2016-4030-8296-e4348b68c177,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bc7fd7e37b387e2d45d26f51cbb4118434a0c7d021446dd300ca7f6fb4bb93f1\"" Jan 13 23:40:37.829471 containerd[1894]: time="2026-01-13T23:40:37.829398716Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 13 23:40:37.863000 audit: BPF prog-id=149 op=LOAD Jan 13 23:40:37.863000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3575 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566666562343437393965653438386335316663306665346339356230 Jan 13 23:40:37.864000 audit: BPF prog-id=150 op=LOAD Jan 13 23:40:37.864000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3575 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566666562343437393965653438386335316663306665346339356230 Jan 13 23:40:37.865000 audit: BPF prog-id=150 op=UNLOAD Jan 13 23:40:37.865000 audit[3646]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566666562343437393965653438386335316663306665346339356230 Jan 13 23:40:37.865000 audit: BPF prog-id=149 op=UNLOAD Jan 13 23:40:37.865000 audit[3646]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566666562343437393965653438386335316663306665346339356230 Jan 13 23:40:37.865000 audit: BPF prog-id=151 op=LOAD Jan 13 23:40:37.865000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3575 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:37.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566666562343437393965653438386335316663306665346339356230 Jan 13 23:40:37.903858 containerd[1894]: time="2026-01-13T23:40:37.903678458Z" level=info msg="StartContainer for \"effeb44799ee488c51fc0fe4c95b0596a38e5ff1061180177dfc7d5c27268642\" returns successfully" Jan 13 23:40:38.246000 audit[3720]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.246000 audit[3720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6268dc0 a2=0 a3=1 items=0 ppid=3663 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.246000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 13 23:40:38.249000 audit[3721]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.249000 audit[3721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7749b30 a2=0 a3=1 items=0 ppid=3663 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 13 23:40:38.252000 audit[3722]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.252000 audit[3722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff4d5af60 a2=0 a3=1 items=0 ppid=3663 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 13 23:40:38.263000 audit[3723]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.263000 audit[3723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd235e260 a2=0 a3=1 items=0 ppid=3663 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 13 23:40:38.269000 audit[3727]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.269000 audit[3727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4aac350 a2=0 a3=1 items=0 ppid=3663 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 13 23:40:38.272000 audit[3728]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3728 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.272000 audit[3728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3fb7a90 a2=0 a3=1 items=0 ppid=3663 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 13 23:40:38.366000 audit[3729]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.366000 audit[3729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe5572970 a2=0 a3=1 items=0 ppid=3663 pid=3729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 13 23:40:38.374000 audit[3731]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.374000 audit[3731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff1a24c90 a2=0 a3=1 items=0 ppid=3663 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.374000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 13 23:40:38.385000 audit[3734]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.385000 audit[3734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff2847de0 a2=0 a3=1 items=0 ppid=3663 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.385000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 13 23:40:38.388000 audit[3735]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.388000 audit[3735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc61d7740 a2=0 a3=1 items=0 ppid=3663 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 13 23:40:38.399000 audit[3737]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.399000 audit[3737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff8e29d40 a2=0 a3=1 items=0 ppid=3663 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 13 23:40:38.402000 audit[3738]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3738 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.402000 audit[3738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe54822a0 a2=0 a3=1 items=0 ppid=3663 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 13 23:40:38.409000 audit[3740]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.409000 audit[3740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe341dfc0 a2=0 a3=1 items=0 ppid=3663 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.409000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:40:38.418000 audit[3743]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.418000 audit[3743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd3991d00 a2=0 a3=1 items=0 ppid=3663 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:40:38.421000 audit[3744]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.421000 audit[3744]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffcb9dd9e0 a2=0 a3=1 items=0 ppid=3663 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 13 23:40:38.426000 audit[3746]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.426000 audit[3746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc7e542a0 a2=0 a3=1 items=0 ppid=3663 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 13 23:40:38.428000 audit[3747]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.428000 audit[3747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2e48320 a2=0 a3=1 items=0 ppid=3663 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 13 23:40:38.434000 audit[3749]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.434000 audit[3749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcffa6ff0 a2=0 a3=1 items=0 ppid=3663 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.434000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 13 23:40:38.442000 audit[3752]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.442000 audit[3752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2c21880 a2=0 a3=1 items=0 ppid=3663 pid=3752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 13 23:40:38.450000 audit[3755]: NETFILTER_CFG table=filter:73 family=2 entries=1 
op=nft_register_rule pid=3755 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.450000 audit[3755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffd02bf60 a2=0 a3=1 items=0 ppid=3663 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 13 23:40:38.453000 audit[3756]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3756 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.453000 audit[3756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd0052ff0 a2=0 a3=1 items=0 ppid=3663 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.453000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 13 23:40:38.458000 audit[3758]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.458000 audit[3758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffde5dc6f0 a2=0 a3=1 items=0 ppid=3663 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:40:38.477000 audit[3761]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.477000 audit[3761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd403c2c0 a2=0 a3=1 items=0 ppid=3663 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.477000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:40:38.483000 audit[3762]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.483000 audit[3762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda4f61e0 a2=0 a3=1 items=0 ppid=3663 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 13 23:40:38.489000 audit[3764]: NETFILTER_CFG 
table=nat:78 family=2 entries=1 op=nft_register_rule pid=3764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:40:38.489000 audit[3764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe1a52ad0 a2=0 a3=1 items=0 ppid=3663 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 13 23:40:38.545000 audit[3770]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:38.545000 audit[3770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffffbcef80 a2=0 a3=1 items=0 ppid=3663 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:38.558000 audit[3770]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:38.558000 audit[3770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffffbcef80 a2=0 a3=1 items=0 ppid=3663 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:38.562000 audit[3775]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.562000 audit[3775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff4f294b0 a2=0 a3=1 items=0 ppid=3663 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.562000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 13 23:40:38.568000 audit[3777]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.568000 audit[3777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff8a5cc10 a2=0 a3=1 items=0 ppid=3663 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.568000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 13 23:40:38.580000 audit[3780]: 
NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.580000 audit[3780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffeaf29070 a2=0 a3=1 items=0 ppid=3663 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 13 23:40:38.583000 audit[3781]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.583000 audit[3781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff420df40 a2=0 a3=1 items=0 ppid=3663 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 13 23:40:38.589000 audit[3783]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.589000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffce21b20 a2=0 a3=1 items=0 ppid=3663 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 13 23:40:38.592000 audit[3784]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.592000 audit[3784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe6886c0 a2=0 a3=1 items=0 ppid=3663 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 13 23:40:38.598000 audit[3786]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.598000 audit[3786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcb23f290 a2=0 a3=1 items=0 ppid=3663 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.598000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:40:38.607000 audit[3789]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.607000 audit[3789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff60df9b0 a2=0 a3=1 items=0 ppid=3663 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:40:38.610000 audit[3790]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.610000 audit[3790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe27d940 a2=0 a3=1 items=0 ppid=3663 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 13 23:40:38.615000 audit[3792]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.615000 audit[3792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff0c95660 a2=0 a3=1 items=0 ppid=3663 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.615000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 13 23:40:38.617000 audit[3793]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.617000 audit[3793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff94bb10 a2=0 a3=1 items=0 ppid=3663 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 13 23:40:38.623000 audit[3795]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.623000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffedf3f7b0 a2=0 a3=1 items=0 ppid=3663 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 13 23:40:38.631000 audit[3798]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.631000 audit[3798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff9f1af60 a2=0 a3=1 items=0 ppid=3663 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.631000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 13 23:40:38.639000 audit[3801]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.639000 audit[3801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc88a4ac0 a2=0 a3=1 items=0 ppid=3663 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 13 23:40:38.642000 audit[3802]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.642000 audit[3802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd628a490 a2=0 a3=1 items=0 ppid=3663 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.642000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 13 23:40:38.647000 audit[3804]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.647000 audit[3804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc8c58860 a2=0 a3=1 items=0 ppid=3663 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:40:38.654000 audit[3807]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule 
pid=3807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.654000 audit[3807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd293eee0 a2=0 a3=1 items=0 ppid=3663 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:40:38.657000 audit[3808]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3808 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.657000 audit[3808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff80c09d0 a2=0 a3=1 items=0 ppid=3663 pid=3808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 13 23:40:38.662000 audit[3810]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3810 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.662000 audit[3810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffa222bf0 a2=0 a3=1 items=0 ppid=3663 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 13 23:40:38.665000 audit[3811]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3811 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.665000 audit[3811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3e99e80 a2=0 a3=1 items=0 ppid=3663 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:40:38.671000 audit[3813]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:40:38.671000 audit[3813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcfd00120 a2=0 a3=1 items=0 ppid=3663 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:40:38.686000 audit[3816]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Jan 13 23:40:38.686000 audit[3816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc0f0e590 a2=0 a3=1 items=0 ppid=3663 pid=3816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.686000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:40:38.698000 audit[3818]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 13 23:40:38.698000 audit[3818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd47f1b20 a2=0 a3=1 items=0 ppid=3663 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.698000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:38.699000 audit[3818]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 13 23:40:38.699000 audit[3818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd47f1b20 a2=0 a3=1 items=0 ppid=3663 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:38.699000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:39.184515 kubelet[3516]: I0113 23:40:39.183626 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bf9tw" podStartSLOduration=2.183603153 podStartE2EDuration="2.183603153s" podCreationTimestamp="2026-01-13 23:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:40:38.540311127 +0000 UTC m=+6.416337499" watchObservedRunningTime="2026-01-13 23:40:39.183603153 +0000 UTC m=+7.059629501" Jan 13 23:40:39.204689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953641066.mount: Deactivated successfully. 
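Annotation (not part of the log): the audit PROCTITLE values in the records above are the invoking command line, hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch in Python; the sample string is copied verbatim from one of the iptables-restore events above.

# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
hex_proctitle = (
    "69707461626C65732D726573746F7265002D770035"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# prints: iptables-restore -w 5 --noflush --counters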
Jan 13 23:40:40.168183 containerd[1894]: time="2026-01-13T23:40:40.167979314Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:40.170703 containerd[1894]: time="2026-01-13T23:40:40.170614895Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 13 23:40:40.172103 containerd[1894]: time="2026-01-13T23:40:40.172047415Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:40.175354 containerd[1894]: time="2026-01-13T23:40:40.175279466Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:40:40.178178 containerd[1894]: time="2026-01-13T23:40:40.178106015Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.348644928s" Jan 13 23:40:40.178264 containerd[1894]: time="2026-01-13T23:40:40.178186551Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 13 23:40:40.190530 containerd[1894]: time="2026-01-13T23:40:40.190406846Z" level=info msg="CreateContainer within sandbox \"bc7fd7e37b387e2d45d26f51cbb4118434a0c7d021446dd300ca7f6fb4bb93f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 23:40:40.208228 containerd[1894]: time="2026-01-13T23:40:40.208158078Z" level=info msg="Container 81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:40:40.214381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104213282.mount: Deactivated successfully. Jan 13 23:40:40.225487 containerd[1894]: time="2026-01-13T23:40:40.225415562Z" level=info msg="CreateContainer within sandbox \"bc7fd7e37b387e2d45d26f51cbb4118434a0c7d021446dd300ca7f6fb4bb93f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40\"" Jan 13 23:40:40.227167 containerd[1894]: time="2026-01-13T23:40:40.227076676Z" level=info msg="StartContainer for \"81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40\"" Jan 13 23:40:40.229946 containerd[1894]: time="2026-01-13T23:40:40.229861852Z" level=info msg="connecting to shim 81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40" address="unix:///run/containerd/s/04cae5463c0309a3f4188c39042ee08cf5bf6db9ce7b9f3b51ebb568eca402ea" protocol=ttrpc version=3 Jan 13 23:40:40.269492 systemd[1]: Started cri-containerd-81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40.scope - libcontainer container 81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40. 
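Annotation (not part of the log): the containerd lines above report 20773434 bytes read for quay.io/tigera/operator:v1.38.7 and a pull that completed in 2.348644928s. A rough back-of-the-envelope throughput check, using only the values copied from those lines (approximate, since the byte counter covers the fetch phase only):

# Approximate pull throughput from the containerd records above.
bytes_read = 20_773_434        # "bytes read=20773434"
pull_seconds = 2.348644928     # "... in 2.348644928s"
print(f"{bytes_read / pull_seconds / 1024 / 1024:.1f} MiB/s")  # roughly 8.4 MiB/s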
Jan 13 23:40:40.296000 audit: BPF prog-id=152 op=LOAD Jan 13 23:40:40.297000 audit: BPF prog-id=153 op=LOAD Jan 13 23:40:40.297000 audit[3827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3621 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:40.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831656638376432653539313266386638353535616363336163363930 Jan 13 23:40:40.297000 audit: BPF prog-id=153 op=UNLOAD Jan 13 23:40:40.297000 audit[3827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3621 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:40.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831656638376432653539313266386638353535616363336163363930 Jan 13 23:40:40.297000 audit: BPF prog-id=154 op=LOAD Jan 13 23:40:40.297000 audit[3827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3621 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:40.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831656638376432653539313266386638353535616363336163363930 Jan 13 23:40:40.297000 audit: BPF prog-id=155 op=LOAD Jan 13 23:40:40.297000 audit[3827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3621 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:40.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831656638376432653539313266386638353535616363336163363930 Jan 13 23:40:40.297000 audit: BPF prog-id=155 op=UNLOAD Jan 13 23:40:40.297000 audit[3827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3621 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:40.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831656638376432653539313266386638353535616363336163363930 Jan 13 23:40:40.297000 audit: BPF prog-id=154 op=UNLOAD Jan 13 23:40:40.297000 audit[3827]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3621 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:40.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831656638376432653539313266386638353535616363336163363930 Jan 13 23:40:40.297000 audit: BPF prog-id=156 op=LOAD Jan 13 23:40:40.297000 audit[3827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3621 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:40.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831656638376432653539313266386638353535616363336163363930 Jan 13 23:40:40.335591 containerd[1894]: time="2026-01-13T23:40:40.335546877Z" level=info msg="StartContainer for \"81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40\" returns successfully" Jan 13 23:40:47.589489 sudo[2250]: pam_unix(sudo:session): session closed for user root Jan 13 23:40:47.589000 audit[2250]: USER_END pid=2250 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:40:47.595968 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 13 23:40:47.596097 kernel: audit: type=1106 audit(1768347647.589:528): pid=2250 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:40:47.596000 audit[2250]: CRED_DISP pid=2250 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:40:47.604700 kernel: audit: type=1104 audit(1768347647.596:529): pid=2250 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:40:47.673649 sshd[2249]: Connection closed by 20.161.92.111 port 58736 Jan 13 23:40:47.675203 sshd-session[2245]: pam_unix(sshd:session): session closed for user core Jan 13 23:40:47.679000 audit[2245]: USER_END pid=2245 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:40:47.691954 systemd[1]: sshd@6-172.31.24.127:22-20.161.92.111:58736.service: Deactivated successfully. 
Jan 13 23:40:47.679000 audit[2245]: CRED_DISP pid=2245 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:40:47.698935 kernel: audit: type=1106 audit(1768347647.679:530): pid=2245 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:40:47.699026 kernel: audit: type=1104 audit(1768347647.679:531): pid=2245 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:40:47.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.127:22-20.161.92.111:58736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:47.700001 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 23:40:47.702123 systemd[1]: session-8.scope: Consumed 14.473s CPU time, 221.9M memory peak. Jan 13 23:40:47.704938 kernel: audit: type=1131 audit(1768347647.694:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.127:22-20.161.92.111:58736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:40:47.717868 systemd-logind[1857]: Session 8 logged out. Waiting for processes to exit. Jan 13 23:40:47.723062 systemd-logind[1857]: Removed session 8. 
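Annotation (not part of the log): the kernel-formatted audit records above carry their own timestamp as audit(EPOCH.MSEC:SERIAL). A small sketch converting one of them to UTC so it can be lined up with the journal prefix; the sample value 1768347647.694:532 is taken from the SERVICE_STOP record above.

from datetime import datetime, timezone

# "audit(1768347647.694:532)" -> epoch seconds plus event serial number
stamp = "1768347647.694:532"
epoch, serial = stamp.split(":")
when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(when.isoformat(), "serial", serial)
# -> 2026-01-13T23:40:47.694000+00:00, matching the "Jan 13 23:40:47" journal prefix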
Jan 13 23:40:51.907000 audit[3908]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:51.907000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe6f0b630 a2=0 a3=1 items=0 ppid=3663 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:51.920968 kernel: audit: type=1325 audit(1768347651.907:533): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:51.921746 kernel: audit: type=1300 audit(1768347651.907:533): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe6f0b630 a2=0 a3=1 items=0 ppid=3663 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:51.921823 kernel: audit: type=1327 audit(1768347651.907:533): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:51.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:51.925000 audit[3908]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:51.932937 kernel: audit: type=1325 audit(1768347651.925:534): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:51.925000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe6f0b630 a2=0 a3=1 items=0 ppid=3663 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:51.961764 kernel: audit: type=1300 audit(1768347651.925:534): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe6f0b630 a2=0 a3=1 items=0 ppid=3663 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:51.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:53.048000 audit[3911]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:53.054679 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:40:53.054799 kernel: audit: type=1325 audit(1768347653.048:535): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:53.048000 audit[3911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffe40eb40 a2=0 a3=1 items=0 ppid=3663 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:53.063713 kernel: audit: type=1300 audit(1768347653.048:535): arch=c00000b7 syscall=211 
success=yes exit=5992 a0=3 a1=fffffe40eb40 a2=0 a3=1 items=0 ppid=3663 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:53.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:53.068926 kernel: audit: type=1327 audit(1768347653.048:535): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:53.055000 audit[3911]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:53.072719 kernel: audit: type=1325 audit(1768347653.055:536): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:40:53.055000 audit[3911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe40eb40 a2=0 a3=1 items=0 ppid=3663 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:53.080098 kernel: audit: type=1300 audit(1768347653.055:536): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe40eb40 a2=0 a3=1 items=0 ppid=3663 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:40:53.084771 kernel: audit: type=1327 audit(1768347653.055:536): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:40:53.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:05.844000 audit[3914]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:05.844000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe2bfe2f0 a2=0 a3=1 items=0 ppid=3663 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:05.860020 kernel: audit: type=1325 audit(1768347665.844:537): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:05.860266 kernel: audit: type=1300 audit(1768347665.844:537): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe2bfe2f0 a2=0 a3=1 items=0 ppid=3663 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:05.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:05.866558 kernel: audit: type=1327 audit(1768347665.844:537): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:05.854000 audit[3914]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3914 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 13 23:41:05.871844 kernel: audit: type=1325 audit(1768347665.854:538): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:05.854000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe2bfe2f0 a2=0 a3=1 items=0 ppid=3663 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:05.882173 kernel: audit: type=1300 audit(1768347665.854:538): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe2bfe2f0 a2=0 a3=1 items=0 ppid=3663 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:05.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:05.888850 kernel: audit: type=1327 audit(1768347665.854:538): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:06.879000 audit[3916]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3916 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:06.879000 audit[3916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcb142d20 a2=0 a3=1 items=0 ppid=3663 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:06.895044 kernel: audit: type=1325 audit(1768347666.879:539): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3916 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:06.895217 kernel: audit: type=1300 audit(1768347666.879:539): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcb142d20 a2=0 a3=1 items=0 ppid=3663 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:06.879000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:06.902571 kernel: audit: type=1327 audit(1768347666.879:539): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:06.907000 audit[3916]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3916 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:06.907000 audit[3916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcb142d20 a2=0 a3=1 items=0 ppid=3663 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:06.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:06.916953 kernel: audit: type=1325 audit(1768347666.907:540): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3916 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:07.944000 
audit[3919]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:07.944000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffed7195b0 a2=0 a3=1 items=0 ppid=3663 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:07.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:07.949000 audit[3919]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:07.949000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffed7195b0 a2=0 a3=1 items=0 ppid=3663 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:07.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:10.458000 audit[3923]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:10.458000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd2597970 a2=0 a3=1 items=0 ppid=3663 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:10.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:10.464000 audit[3923]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:10.464000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd2597970 a2=0 a3=1 items=0 ppid=3663 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:10.464000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:10.508933 kubelet[3516]: I0113 23:41:10.508364 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-9nnqw" podStartSLOduration=31.155690903 podStartE2EDuration="33.508337827s" podCreationTimestamp="2026-01-13 23:40:37 +0000 UTC" firstStartedPulling="2026-01-13 23:40:37.826630708 +0000 UTC m=+5.702657068" lastFinishedPulling="2026-01-13 23:40:40.179277632 +0000 UTC m=+8.055303992" observedRunningTime="2026-01-13 23:40:40.51408347 +0000 UTC m=+8.390109830" watchObservedRunningTime="2026-01-13 23:41:10.508337827 +0000 UTC m=+38.384364187" Jan 13 23:41:10.533572 systemd[1]: Created slice kubepods-besteffort-pod4fe148f2_ff88_45cf_87ee_11cee17269e2.slice - libcontainer container kubepods-besteffort-pod4fe148f2_ff88_45cf_87ee_11cee17269e2.slice. 
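Annotation (not part of the log): the pod_startup_latency_tracker entry above for tigera-operator-65cdcdfd6d-9nnqw reports both an end-to-end duration and an SLO duration; the SLO figure appears to be the end-to-end time minus the image-pull window, which the numbers in that entry bear out. A quick check, all values copied from the log (the m=+ monotonic offsets of firstStartedPulling and lastFinishedPulling):

# Values from the "Observed pod startup duration" entry above.
e2e = 33.508337827                    # podStartE2EDuration
pull = 8.055303992 - 5.702657068      # lastFinishedPulling - firstStartedPulling
print(round(e2e - pull, 9))           # 31.155690903 == podStartSLOduration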
Jan 13 23:41:10.612533 kubelet[3516]: I0113 23:41:10.612451 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe148f2-ff88-45cf-87ee-11cee17269e2-tigera-ca-bundle\") pod \"calico-typha-69c6ccfcfc-w8hvx\" (UID: \"4fe148f2-ff88-45cf-87ee-11cee17269e2\") " pod="calico-system/calico-typha-69c6ccfcfc-w8hvx" Jan 13 23:41:10.613210 kubelet[3516]: I0113 23:41:10.613089 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4fe148f2-ff88-45cf-87ee-11cee17269e2-typha-certs\") pod \"calico-typha-69c6ccfcfc-w8hvx\" (UID: \"4fe148f2-ff88-45cf-87ee-11cee17269e2\") " pod="calico-system/calico-typha-69c6ccfcfc-w8hvx" Jan 13 23:41:10.613893 kubelet[3516]: I0113 23:41:10.613529 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fccd\" (UniqueName: \"kubernetes.io/projected/4fe148f2-ff88-45cf-87ee-11cee17269e2-kube-api-access-2fccd\") pod \"calico-typha-69c6ccfcfc-w8hvx\" (UID: \"4fe148f2-ff88-45cf-87ee-11cee17269e2\") " pod="calico-system/calico-typha-69c6ccfcfc-w8hvx" Jan 13 23:41:10.782968 systemd[1]: Created slice kubepods-besteffort-pod7f97d44e_d585_40d7_96d5_6dc0816d8a32.slice - libcontainer container kubepods-besteffort-pod7f97d44e_d585_40d7_96d5_6dc0816d8a32.slice. Jan 13 23:41:10.819462 kubelet[3516]: I0113 23:41:10.819375 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f97d44e-d585-40d7-96d5-6dc0816d8a32-var-lib-calico\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819638 kubelet[3516]: I0113 23:41:10.819471 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7f97d44e-d585-40d7-96d5-6dc0816d8a32-var-run-calico\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819638 kubelet[3516]: I0113 23:41:10.819517 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7f97d44e-d585-40d7-96d5-6dc0816d8a32-cni-net-dir\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819638 kubelet[3516]: I0113 23:41:10.819557 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f97d44e-d585-40d7-96d5-6dc0816d8a32-lib-modules\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819638 kubelet[3516]: I0113 23:41:10.819599 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f97d44e-d585-40d7-96d5-6dc0816d8a32-tigera-ca-bundle\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819638 kubelet[3516]: I0113 23:41:10.819637 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/7f97d44e-d585-40d7-96d5-6dc0816d8a32-cni-bin-dir\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819967 kubelet[3516]: I0113 23:41:10.819685 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r52h\" (UniqueName: \"kubernetes.io/projected/7f97d44e-d585-40d7-96d5-6dc0816d8a32-kube-api-access-2r52h\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819967 kubelet[3516]: I0113 23:41:10.819728 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7f97d44e-d585-40d7-96d5-6dc0816d8a32-flexvol-driver-host\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819967 kubelet[3516]: I0113 23:41:10.819766 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7f97d44e-d585-40d7-96d5-6dc0816d8a32-node-certs\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819967 kubelet[3516]: I0113 23:41:10.819808 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7f97d44e-d585-40d7-96d5-6dc0816d8a32-policysync\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.819967 kubelet[3516]: I0113 23:41:10.819855 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7f97d44e-d585-40d7-96d5-6dc0816d8a32-cni-log-dir\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.822443 kubelet[3516]: I0113 23:41:10.822372 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f97d44e-d585-40d7-96d5-6dc0816d8a32-xtables-lock\") pod \"calico-node-hnqxs\" (UID: \"7f97d44e-d585-40d7-96d5-6dc0816d8a32\") " pod="calico-system/calico-node-hnqxs" Jan 13 23:41:10.847321 containerd[1894]: time="2026-01-13T23:41:10.847217204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c6ccfcfc-w8hvx,Uid:4fe148f2-ff88-45cf-87ee-11cee17269e2,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:10.911260 containerd[1894]: time="2026-01-13T23:41:10.911177789Z" level=info msg="connecting to shim ac831407d28a40b6290603e326b18031dfdc69e0d98d87833312c14667c06cab" address="unix:///run/containerd/s/a68401ecc8d1a95c7004c449af01c02f174fb31790091e74f47c4e4cd025eca7" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:10.951611 kubelet[3516]: E0113 23:41:10.951556 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:10.951611 kubelet[3516]: W0113 23:41:10.951599 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 
23:41:10.952351 kubelet[3516]: E0113 23:41:10.951640 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:10.978943 kubelet[3516]: E0113 23:41:10.977402 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:10.980884 kubelet[3516]: W0113 23:41:10.979077 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:10.980884 kubelet[3516]: E0113 23:41:10.979407 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:10.983865 kubelet[3516]: E0113 23:41:10.983407 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:11.006267 kubelet[3516]: E0113 23:41:11.006139 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.006267 kubelet[3516]: W0113 23:41:11.006259 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.006520 kubelet[3516]: E0113 23:41:11.006301 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.009441 kubelet[3516]: E0113 23:41:11.009384 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.009441 kubelet[3516]: W0113 23:41:11.009427 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.009691 kubelet[3516]: E0113 23:41:11.009465 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.018089 kubelet[3516]: E0113 23:41:11.018020 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.019588 kubelet[3516]: W0113 23:41:11.019483 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.019588 kubelet[3516]: E0113 23:41:11.019554 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.023242 kubelet[3516]: E0113 23:41:11.023184 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.023242 kubelet[3516]: W0113 23:41:11.023226 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.023447 kubelet[3516]: E0113 23:41:11.023267 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.024196 kubelet[3516]: E0113 23:41:11.024135 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.024196 kubelet[3516]: W0113 23:41:11.024179 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.024431 kubelet[3516]: E0113 23:41:11.024217 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.025252 kubelet[3516]: E0113 23:41:11.025189 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.025252 kubelet[3516]: W0113 23:41:11.025236 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.025479 kubelet[3516]: E0113 23:41:11.025275 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.026638 systemd[1]: Started cri-containerd-ac831407d28a40b6290603e326b18031dfdc69e0d98d87833312c14667c06cab.scope - libcontainer container ac831407d28a40b6290603e326b18031dfdc69e0d98d87833312c14667c06cab. Jan 13 23:41:11.028974 kubelet[3516]: E0113 23:41:11.027799 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.028974 kubelet[3516]: W0113 23:41:11.027845 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.029322 kubelet[3516]: E0113 23:41:11.027884 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.035558 kubelet[3516]: E0113 23:41:11.034300 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.035558 kubelet[3516]: W0113 23:41:11.034344 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.035558 kubelet[3516]: E0113 23:41:11.034382 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.035558 kubelet[3516]: E0113 23:41:11.035245 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.035558 kubelet[3516]: W0113 23:41:11.035277 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.035558 kubelet[3516]: E0113 23:41:11.035313 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.039799 kubelet[3516]: E0113 23:41:11.039739 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.039799 kubelet[3516]: W0113 23:41:11.039776 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.041298 kubelet[3516]: E0113 23:41:11.039813 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.041298 kubelet[3516]: E0113 23:41:11.040782 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.041298 kubelet[3516]: W0113 23:41:11.040809 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.041298 kubelet[3516]: E0113 23:41:11.040841 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.043310 kubelet[3516]: E0113 23:41:11.043234 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.044398 kubelet[3516]: W0113 23:41:11.044121 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.044398 kubelet[3516]: E0113 23:41:11.044189 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.045544 kubelet[3516]: E0113 23:41:11.045452 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.045544 kubelet[3516]: W0113 23:41:11.045498 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.045544 kubelet[3516]: E0113 23:41:11.045534 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.050958 kubelet[3516]: E0113 23:41:11.050863 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.050958 kubelet[3516]: W0113 23:41:11.050953 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.051206 kubelet[3516]: E0113 23:41:11.050993 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.052249 kubelet[3516]: E0113 23:41:11.052184 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.052249 kubelet[3516]: W0113 23:41:11.052231 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.053198 kubelet[3516]: E0113 23:41:11.052268 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.053602 kubelet[3516]: E0113 23:41:11.053533 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.053602 kubelet[3516]: W0113 23:41:11.053581 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.053777 kubelet[3516]: E0113 23:41:11.053617 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.055783 kubelet[3516]: E0113 23:41:11.055722 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.055955 kubelet[3516]: W0113 23:41:11.055884 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.056043 kubelet[3516]: E0113 23:41:11.055972 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.057356 kubelet[3516]: E0113 23:41:11.057289 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.057356 kubelet[3516]: W0113 23:41:11.057337 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.057570 kubelet[3516]: E0113 23:41:11.057376 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.058804 kubelet[3516]: E0113 23:41:11.058728 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.058804 kubelet[3516]: W0113 23:41:11.058778 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.059053 kubelet[3516]: E0113 23:41:11.058821 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.059891 kubelet[3516]: E0113 23:41:11.059826 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.059891 kubelet[3516]: W0113 23:41:11.059869 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.060974 kubelet[3516]: E0113 23:41:11.060521 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.061178 kubelet[3516]: E0113 23:41:11.061123 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.061178 kubelet[3516]: W0113 23:41:11.061170 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.061302 kubelet[3516]: E0113 23:41:11.061206 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.062797 kubelet[3516]: E0113 23:41:11.062600 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.062797 kubelet[3516]: W0113 23:41:11.062780 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.063040 kubelet[3516]: E0113 23:41:11.062817 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.063040 kubelet[3516]: I0113 23:41:11.062871 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e305c05b-4fdf-40a3-854a-8a106f493072-socket-dir\") pod \"csi-node-driver-p84n5\" (UID: \"e305c05b-4fdf-40a3-854a-8a106f493072\") " pod="calico-system/csi-node-driver-p84n5" Jan 13 23:41:11.064495 kubelet[3516]: E0113 23:41:11.064429 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.064495 kubelet[3516]: W0113 23:41:11.064475 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.064711 kubelet[3516]: E0113 23:41:11.064512 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.065951 kubelet[3516]: E0113 23:41:11.065858 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.066109 kubelet[3516]: W0113 23:41:11.066070 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.066189 kubelet[3516]: E0113 23:41:11.066111 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.067177 kubelet[3516]: E0113 23:41:11.067119 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.067177 kubelet[3516]: W0113 23:41:11.067162 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.067392 kubelet[3516]: E0113 23:41:11.067218 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.067392 kubelet[3516]: I0113 23:41:11.067274 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e305c05b-4fdf-40a3-854a-8a106f493072-registration-dir\") pod \"csi-node-driver-p84n5\" (UID: \"e305c05b-4fdf-40a3-854a-8a106f493072\") " pod="calico-system/csi-node-driver-p84n5" Jan 13 23:41:11.068522 kubelet[3516]: E0113 23:41:11.068463 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.068522 kubelet[3516]: W0113 23:41:11.068507 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.068731 kubelet[3516]: E0113 23:41:11.068547 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.069464 kubelet[3516]: I0113 23:41:11.068727 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e305c05b-4fdf-40a3-854a-8a106f493072-varrun\") pod \"csi-node-driver-p84n5\" (UID: \"e305c05b-4fdf-40a3-854a-8a106f493072\") " pod="calico-system/csi-node-driver-p84n5" Jan 13 23:41:11.070960 kubelet[3516]: E0113 23:41:11.069734 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.070960 kubelet[3516]: W0113 23:41:11.069764 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.070960 kubelet[3516]: E0113 23:41:11.069795 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.071247 kubelet[3516]: E0113 23:41:11.071036 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.071319 kubelet[3516]: W0113 23:41:11.071253 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.071482 kubelet[3516]: E0113 23:41:11.071288 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.074710 kubelet[3516]: E0113 23:41:11.074651 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.074710 kubelet[3516]: W0113 23:41:11.074694 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.074978 kubelet[3516]: E0113 23:41:11.074730 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.075100 kubelet[3516]: I0113 23:41:11.075051 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz8pg\" (UniqueName: \"kubernetes.io/projected/e305c05b-4fdf-40a3-854a-8a106f493072-kube-api-access-dz8pg\") pod \"csi-node-driver-p84n5\" (UID: \"e305c05b-4fdf-40a3-854a-8a106f493072\") " pod="calico-system/csi-node-driver-p84n5" Jan 13 23:41:11.076319 kubelet[3516]: E0113 23:41:11.076264 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.076319 kubelet[3516]: W0113 23:41:11.076314 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.076743 kubelet[3516]: E0113 23:41:11.076351 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.077345 kubelet[3516]: I0113 23:41:11.077232 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e305c05b-4fdf-40a3-854a-8a106f493072-kubelet-dir\") pod \"csi-node-driver-p84n5\" (UID: \"e305c05b-4fdf-40a3-854a-8a106f493072\") " pod="calico-system/csi-node-driver-p84n5" Jan 13 23:41:11.077762 kubelet[3516]: E0113 23:41:11.077620 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.077762 kubelet[3516]: W0113 23:41:11.077666 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.077762 kubelet[3516]: E0113 23:41:11.077707 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.079496 kubelet[3516]: E0113 23:41:11.079027 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.079496 kubelet[3516]: W0113 23:41:11.079230 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.079496 kubelet[3516]: E0113 23:41:11.079270 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.081583 kubelet[3516]: E0113 23:41:11.081513 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.081583 kubelet[3516]: W0113 23:41:11.081560 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.081583 kubelet[3516]: E0113 23:41:11.081597 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.084152 kubelet[3516]: E0113 23:41:11.083347 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.084152 kubelet[3516]: W0113 23:41:11.083393 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.084152 kubelet[3516]: E0113 23:41:11.083432 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.084441 kubelet[3516]: E0113 23:41:11.084312 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.084441 kubelet[3516]: W0113 23:41:11.084342 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.084441 kubelet[3516]: E0113 23:41:11.084388 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.086512 kubelet[3516]: E0113 23:41:11.085532 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.086512 kubelet[3516]: W0113 23:41:11.085580 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.086512 kubelet[3516]: E0113 23:41:11.085649 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.102316 containerd[1894]: time="2026-01-13T23:41:11.102169178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hnqxs,Uid:7f97d44e-d585-40d7-96d5-6dc0816d8a32,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:11.160371 containerd[1894]: time="2026-01-13T23:41:11.160267077Z" level=info msg="connecting to shim 3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7" address="unix:///run/containerd/s/937641e1ba0f0117aa0e18b452b6da0c0c6a96f1a36a66dbcab68cdeb9486d29" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:11.178649 kubelet[3516]: E0113 23:41:11.178577 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.178649 kubelet[3516]: W0113 23:41:11.178624 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.178869 kubelet[3516]: E0113 23:41:11.178667 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.180403 kubelet[3516]: E0113 23:41:11.180334 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.180403 kubelet[3516]: W0113 23:41:11.180386 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.180679 kubelet[3516]: E0113 23:41:11.180436 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.182189 kubelet[3516]: E0113 23:41:11.182095 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.182189 kubelet[3516]: W0113 23:41:11.182142 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.182561 kubelet[3516]: E0113 23:41:11.182201 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.183946 kubelet[3516]: E0113 23:41:11.183827 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.184231 kubelet[3516]: W0113 23:41:11.183975 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.184231 kubelet[3516]: E0113 23:41:11.184018 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.188940 kubelet[3516]: E0113 23:41:11.186312 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.188940 kubelet[3516]: W0113 23:41:11.186359 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.188940 kubelet[3516]: E0113 23:41:11.186395 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.188940 kubelet[3516]: E0113 23:41:11.187143 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.188940 kubelet[3516]: W0113 23:41:11.187173 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.188940 kubelet[3516]: E0113 23:41:11.187209 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.188940 kubelet[3516]: E0113 23:41:11.188309 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.188940 kubelet[3516]: W0113 23:41:11.188341 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.188940 kubelet[3516]: E0113 23:41:11.188377 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.194296 kubelet[3516]: E0113 23:41:11.193456 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.194296 kubelet[3516]: W0113 23:41:11.193503 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.194296 kubelet[3516]: E0113 23:41:11.193541 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.198953 kubelet[3516]: E0113 23:41:11.198035 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.198953 kubelet[3516]: W0113 23:41:11.198081 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.198953 kubelet[3516]: E0113 23:41:11.198116 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.202784 kubelet[3516]: E0113 23:41:11.202291 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.202784 kubelet[3516]: W0113 23:41:11.202331 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.202784 kubelet[3516]: E0113 23:41:11.202367 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.204939 kubelet[3516]: E0113 23:41:11.204410 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.204939 kubelet[3516]: W0113 23:41:11.204450 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.204939 kubelet[3516]: E0113 23:41:11.204489 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.208085 kubelet[3516]: E0113 23:41:11.207813 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.208085 kubelet[3516]: W0113 23:41:11.207859 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.209443 kubelet[3516]: E0113 23:41:11.209209 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.213203 kubelet[3516]: E0113 23:41:11.213140 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.213203 kubelet[3516]: W0113 23:41:11.213187 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.213428 kubelet[3516]: E0113 23:41:11.213225 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.217087 kubelet[3516]: E0113 23:41:11.216976 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.217087 kubelet[3516]: W0113 23:41:11.217072 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.217337 kubelet[3516]: E0113 23:41:11.217112 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.218333 kubelet[3516]: E0113 23:41:11.218282 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.218333 kubelet[3516]: W0113 23:41:11.218329 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.218514 kubelet[3516]: E0113 23:41:11.218367 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.221230 kubelet[3516]: E0113 23:41:11.221163 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.221230 kubelet[3516]: W0113 23:41:11.221210 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.221439 kubelet[3516]: E0113 23:41:11.221246 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.223707 kubelet[3516]: E0113 23:41:11.223567 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.224142 kubelet[3516]: W0113 23:41:11.224080 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.224257 kubelet[3516]: E0113 23:41:11.224157 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.226792 kubelet[3516]: E0113 23:41:11.226666 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.226792 kubelet[3516]: W0113 23:41:11.226762 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.227385 kubelet[3516]: E0113 23:41:11.226801 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.228794 kubelet[3516]: E0113 23:41:11.228683 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.229038 kubelet[3516]: W0113 23:41:11.228859 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.229266 kubelet[3516]: E0113 23:41:11.229200 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.232145 kubelet[3516]: E0113 23:41:11.232072 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.232145 kubelet[3516]: W0113 23:41:11.232119 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.232512 kubelet[3516]: E0113 23:41:11.232156 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.233774 kubelet[3516]: E0113 23:41:11.233706 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.233774 kubelet[3516]: W0113 23:41:11.233754 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.234011 kubelet[3516]: E0113 23:41:11.233793 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.236714 kubelet[3516]: E0113 23:41:11.236642 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.236714 kubelet[3516]: W0113 23:41:11.236692 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.236948 kubelet[3516]: E0113 23:41:11.236731 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.237681 kubelet[3516]: E0113 23:41:11.237629 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.237681 kubelet[3516]: W0113 23:41:11.237669 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.237877 kubelet[3516]: E0113 23:41:11.237704 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.241461 kubelet[3516]: E0113 23:41:11.241401 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.241461 kubelet[3516]: W0113 23:41:11.241446 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.241662 kubelet[3516]: E0113 23:41:11.241483 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.244347 kubelet[3516]: E0113 23:41:11.244266 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.244347 kubelet[3516]: W0113 23:41:11.244352 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.244572 kubelet[3516]: E0113 23:41:11.244391 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:11.268735 systemd[1]: Started cri-containerd-3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7.scope - libcontainer container 3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7. Jan 13 23:41:11.270250 kubelet[3516]: E0113 23:41:11.270193 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:11.270250 kubelet[3516]: W0113 23:41:11.270239 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:11.270505 kubelet[3516]: E0113 23:41:11.270279 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:11.302851 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 13 23:41:11.303039 kernel: audit: type=1334 audit(1768347671.298:545): prog-id=157 op=LOAD Jan 13 23:41:11.298000 audit: BPF prog-id=157 op=LOAD Jan 13 23:41:11.308000 audit: BPF prog-id=158 op=LOAD Jan 13 23:41:11.308000 audit[3948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.329623 kernel: audit: type=1334 audit(1768347671.308:546): prog-id=158 op=LOAD Jan 13 23:41:11.329765 kernel: audit: type=1300 audit(1768347671.308:546): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.329810 kernel: audit: type=1327 audit(1768347671.308:546): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.309000 audit: BPF prog-id=158 op=UNLOAD Jan 13 23:41:11.334576 kernel: audit: type=1334 audit(1768347671.309:547): prog-id=158 op=UNLOAD Jan 13 23:41:11.334930 kernel: audit: type=1300 audit(1768347671.309:547): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.309000 audit[3948]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.355953 kernel: audit: type=1327 audit(1768347671.309:547): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.356092 kernel: audit: type=1334 audit(1768347671.312:548): prog-id=159 op=LOAD Jan 13 23:41:11.356141 kernel: audit: type=1300 audit(1768347671.312:548): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.312000 audit: BPF prog-id=159 op=LOAD Jan 13 23:41:11.312000 audit[3948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.362378 kernel: audit: type=1327 audit(1768347671.312:548): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.343000 audit: BPF prog-id=160 op=LOAD Jan 13 23:41:11.343000 audit[3948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.358000 audit: BPF prog-id=160 op=UNLOAD Jan 13 23:41:11.358000 audit[3948]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.358000 audit: BPF prog-id=159 op=UNLOAD Jan 13 23:41:11.358000 audit[3948]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.358000 audit: BPF prog-id=161 op=LOAD Jan 13 23:41:11.358000 audit[3948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3937 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.358000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383331343037643238613430623632393036303365333236623138 Jan 13 23:41:11.438000 audit: BPF prog-id=162 op=LOAD Jan 13 23:41:11.441000 audit: BPF prog-id=163 op=LOAD Jan 13 23:41:11.441000 audit[4042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4026 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636132643831343566356333663361383433316335323536326434 Jan 13 23:41:11.441000 audit: BPF prog-id=163 op=UNLOAD Jan 13 23:41:11.441000 audit[4042]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636132643831343566356333663361383433316335323536326434 Jan 13 23:41:11.447000 audit: BPF prog-id=164 op=LOAD Jan 13 23:41:11.447000 audit[4042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4026 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636132643831343566356333663361383433316335323536326434 Jan 13 23:41:11.447000 audit: BPF prog-id=165 op=LOAD Jan 13 23:41:11.447000 audit[4042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4026 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636132643831343566356333663361383433316335323536326434 Jan 13 23:41:11.452000 audit: BPF prog-id=165 op=UNLOAD Jan 13 23:41:11.452000 audit[4042]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.452000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636132643831343566356333663361383433316335323536326434 Jan 13 23:41:11.452000 audit: BPF prog-id=164 op=UNLOAD Jan 13 23:41:11.452000 audit[4042]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636132643831343566356333663361383433316335323536326434 Jan 13 23:41:11.452000 audit: BPF prog-id=166 op=LOAD Jan 13 23:41:11.452000 audit[4042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4026 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636132643831343566356333663361383433316335323536326434 Jan 13 23:41:11.514301 containerd[1894]: time="2026-01-13T23:41:11.514079277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c6ccfcfc-w8hvx,Uid:4fe148f2-ff88-45cf-87ee-11cee17269e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac831407d28a40b6290603e326b18031dfdc69e0d98d87833312c14667c06cab\"" Jan 13 23:41:11.522512 containerd[1894]: time="2026-01-13T23:41:11.522462827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 13 23:41:11.547000 audit[4098]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:11.547000 audit[4098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd1f3db20 a2=0 a3=1 items=0 ppid=3663 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:11.554000 audit[4098]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:11.554000 audit[4098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd1f3db20 a2=0 a3=1 items=0 ppid=3663 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:11.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:11.556566 containerd[1894]: time="2026-01-13T23:41:11.553459966Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-hnqxs,Uid:7f97d44e-d585-40d7-96d5-6dc0816d8a32,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7\"" Jan 13 23:41:12.401427 kubelet[3516]: E0113 23:41:12.401329 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:12.968608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766134611.mount: Deactivated successfully. Jan 13 23:41:13.952033 containerd[1894]: time="2026-01-13T23:41:13.951888146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:13.955536 containerd[1894]: time="2026-01-13T23:41:13.955141123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:13.957931 containerd[1894]: time="2026-01-13T23:41:13.957802830Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:13.962766 containerd[1894]: time="2026-01-13T23:41:13.962682980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:13.964473 containerd[1894]: time="2026-01-13T23:41:13.964398794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.441817828s" Jan 13 23:41:13.964885 containerd[1894]: time="2026-01-13T23:41:13.964714264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 13 23:41:13.967071 containerd[1894]: time="2026-01-13T23:41:13.966832398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 13 23:41:14.004919 containerd[1894]: time="2026-01-13T23:41:14.004814556Z" level=info msg="CreateContainer within sandbox \"ac831407d28a40b6290603e326b18031dfdc69e0d98d87833312c14667c06cab\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 23:41:14.032652 containerd[1894]: time="2026-01-13T23:41:14.032518725Z" level=info msg="Container 17034e9a83836e802552aeae691d89f9bb2a69ba991f311ed6f2e0511fec3f68: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:41:14.039551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256775083.mount: Deactivated successfully. 
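
The repeated kubelet errors above all come from the FlexVolume dynamic plugin prober: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to unmarshal the driver's stdout as JSON. Because the uds executable is not present on this node, stdout is empty and decoding fails with "unexpected end of JSON input" on every probe. Below is a minimal sketch of the init handshake a FlexVolume driver is expected to answer; the struct follows the published FlexVolume response convention ({"status":"Success","capabilities":{...}}) and is only an illustration, not the vendor's actual uds binary.

```go
// Sketch of a FlexVolume driver's "init" handshake, for illustration only.
// It prints the JSON shape the kubelet's driver-call code tries to unmarshal
// in the log above; an empty stdout is exactly what yields
// "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"os"
)

// DriverStatus mirrors the documented FlexVolume response convention.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		_ = json.NewEncoder(os.Stdout).Encode(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Other driver calls report "Not supported" in the same JSON shape.
	_ = json.NewEncoder(os.Stdout).Encode(DriverStatus{
		Status:  "Not supported",
		Message: "only init is implemented in this sketch",
	})
}
```

The audit records interleaved above encode each process command line as hex in the proctitle= field, with NUL bytes separating the arguments. A small standard-library sketch (the sample value is a shortened copy of one of the records above) recovers the readable form, which turns out to be the runc invocations issued by containerd for the new sandboxes:

```go
// Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Truncated sample from the records above; real values are longer.
	proctitle := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Output: runc --root /run/containerd/runc/k8s.io
}
```
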
Jan 13 23:41:14.057324 containerd[1894]: time="2026-01-13T23:41:14.057259130Z" level=info msg="CreateContainer within sandbox \"ac831407d28a40b6290603e326b18031dfdc69e0d98d87833312c14667c06cab\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"17034e9a83836e802552aeae691d89f9bb2a69ba991f311ed6f2e0511fec3f68\"" Jan 13 23:41:14.059363 containerd[1894]: time="2026-01-13T23:41:14.058998415Z" level=info msg="StartContainer for \"17034e9a83836e802552aeae691d89f9bb2a69ba991f311ed6f2e0511fec3f68\"" Jan 13 23:41:14.062236 containerd[1894]: time="2026-01-13T23:41:14.062173738Z" level=info msg="connecting to shim 17034e9a83836e802552aeae691d89f9bb2a69ba991f311ed6f2e0511fec3f68" address="unix:///run/containerd/s/a68401ecc8d1a95c7004c449af01c02f174fb31790091e74f47c4e4cd025eca7" protocol=ttrpc version=3 Jan 13 23:41:14.106338 systemd[1]: Started cri-containerd-17034e9a83836e802552aeae691d89f9bb2a69ba991f311ed6f2e0511fec3f68.scope - libcontainer container 17034e9a83836e802552aeae691d89f9bb2a69ba991f311ed6f2e0511fec3f68. Jan 13 23:41:14.142000 audit: BPF prog-id=167 op=LOAD Jan 13 23:41:14.143000 audit: BPF prog-id=168 op=LOAD Jan 13 23:41:14.143000 audit[4109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3937 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:14.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137303334653961383338333665383032353532616561653639316438 Jan 13 23:41:14.144000 audit: BPF prog-id=168 op=UNLOAD Jan 13 23:41:14.144000 audit[4109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3937 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:14.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137303334653961383338333665383032353532616561653639316438 Jan 13 23:41:14.144000 audit: BPF prog-id=169 op=LOAD Jan 13 23:41:14.144000 audit[4109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3937 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:14.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137303334653961383338333665383032353532616561653639316438 Jan 13 23:41:14.145000 audit: BPF prog-id=170 op=LOAD Jan 13 23:41:14.145000 audit[4109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3937 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:14.145000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137303334653961383338333665383032353532616561653639316438 Jan 13 23:41:14.145000 audit: BPF prog-id=170 op=UNLOAD Jan 13 23:41:14.145000 audit[4109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3937 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:14.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137303334653961383338333665383032353532616561653639316438 Jan 13 23:41:14.146000 audit: BPF prog-id=169 op=UNLOAD Jan 13 23:41:14.146000 audit[4109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3937 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:14.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137303334653961383338333665383032353532616561653639316438 Jan 13 23:41:14.146000 audit: BPF prog-id=171 op=LOAD Jan 13 23:41:14.146000 audit[4109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3937 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:14.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137303334653961383338333665383032353532616561653639316438 Jan 13 23:41:14.225110 containerd[1894]: time="2026-01-13T23:41:14.224733742Z" level=info msg="StartContainer for \"17034e9a83836e802552aeae691d89f9bb2a69ba991f311ed6f2e0511fec3f68\" returns successfully" Jan 13 23:41:14.396130 kubelet[3516]: E0113 23:41:14.395105 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:14.682815 kubelet[3516]: I0113 23:41:14.682605 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69c6ccfcfc-w8hvx" podStartSLOduration=2.235749785 podStartE2EDuration="4.682460863s" podCreationTimestamp="2026-01-13 23:41:10 +0000 UTC" firstStartedPulling="2026-01-13 23:41:11.519343872 +0000 UTC m=+39.395370244" lastFinishedPulling="2026-01-13 23:41:13.96605495 +0000 UTC m=+41.842081322" observedRunningTime="2026-01-13 23:41:14.681850538 +0000 UTC m=+42.557876922" watchObservedRunningTime="2026-01-13 23:41:14.682460863 +0000 UTC m=+42.558487343" Jan 13 23:41:14.694826 kubelet[3516]: 
E0113 23:41:14.694763 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.694826 kubelet[3516]: W0113 23:41:14.694810 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.695140 kubelet[3516]: E0113 23:41:14.694848 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.695814 kubelet[3516]: E0113 23:41:14.695532 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.695814 kubelet[3516]: W0113 23:41:14.695560 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.695814 kubelet[3516]: E0113 23:41:14.695635 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.697155 kubelet[3516]: E0113 23:41:14.697111 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.697155 kubelet[3516]: W0113 23:41:14.697152 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.697393 kubelet[3516]: E0113 23:41:14.697188 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.698038 kubelet[3516]: E0113 23:41:14.697990 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.698038 kubelet[3516]: W0113 23:41:14.698032 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.698240 kubelet[3516]: E0113 23:41:14.698068 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.700395 kubelet[3516]: E0113 23:41:14.699947 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.700395 kubelet[3516]: W0113 23:41:14.699996 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.700395 kubelet[3516]: E0113 23:41:14.700034 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:14.700638 kubelet[3516]: E0113 23:41:14.700527 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.700638 kubelet[3516]: W0113 23:41:14.700554 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.700638 kubelet[3516]: E0113 23:41:14.700582 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.701051 kubelet[3516]: E0113 23:41:14.701012 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.701051 kubelet[3516]: W0113 23:41:14.701046 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.701176 kubelet[3516]: E0113 23:41:14.701080 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.702631 kubelet[3516]: E0113 23:41:14.702571 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.702631 kubelet[3516]: W0113 23:41:14.702613 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.702846 kubelet[3516]: E0113 23:41:14.702647 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.703175 kubelet[3516]: E0113 23:41:14.703123 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.703175 kubelet[3516]: W0113 23:41:14.703158 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.703296 kubelet[3516]: E0113 23:41:14.703188 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.704178 kubelet[3516]: E0113 23:41:14.704130 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.704178 kubelet[3516]: W0113 23:41:14.704169 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.704598 kubelet[3516]: E0113 23:41:14.704204 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:14.704857 kubelet[3516]: E0113 23:41:14.704804 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.704857 kubelet[3516]: W0113 23:41:14.704838 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.705136 kubelet[3516]: E0113 23:41:14.704869 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.706448 kubelet[3516]: E0113 23:41:14.706402 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.706448 kubelet[3516]: W0113 23:41:14.706441 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.706867 kubelet[3516]: E0113 23:41:14.706478 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.707155 kubelet[3516]: E0113 23:41:14.707059 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.707155 kubelet[3516]: W0113 23:41:14.707085 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.707155 kubelet[3516]: E0113 23:41:14.707115 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.709039 kubelet[3516]: E0113 23:41:14.708976 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.709039 kubelet[3516]: W0113 23:41:14.709031 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.709488 kubelet[3516]: E0113 23:41:14.709068 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.710135 kubelet[3516]: E0113 23:41:14.710048 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.710135 kubelet[3516]: W0113 23:41:14.710086 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.710135 kubelet[3516]: E0113 23:41:14.710121 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:14.749709 kubelet[3516]: E0113 23:41:14.749657 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.749709 kubelet[3516]: W0113 23:41:14.749698 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.750046 kubelet[3516]: E0113 23:41:14.749733 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.750665 kubelet[3516]: E0113 23:41:14.750615 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.750665 kubelet[3516]: W0113 23:41:14.750654 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.751125 kubelet[3516]: E0113 23:41:14.750690 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.753043 kubelet[3516]: E0113 23:41:14.752961 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.753243 kubelet[3516]: W0113 23:41:14.753209 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.753481 kubelet[3516]: E0113 23:41:14.753399 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.755218 kubelet[3516]: E0113 23:41:14.755149 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.755218 kubelet[3516]: W0113 23:41:14.755207 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.755521 kubelet[3516]: E0113 23:41:14.755246 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.756278 kubelet[3516]: E0113 23:41:14.756227 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.757252 kubelet[3516]: W0113 23:41:14.756982 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.757695 kubelet[3516]: E0113 23:41:14.757451 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:14.759465 kubelet[3516]: E0113 23:41:14.759119 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.759465 kubelet[3516]: W0113 23:41:14.759154 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.759465 kubelet[3516]: E0113 23:41:14.759189 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.760247 kubelet[3516]: E0113 23:41:14.760208 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.760846 kubelet[3516]: W0113 23:41:14.760441 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.760846 kubelet[3516]: E0113 23:41:14.760486 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.762187 kubelet[3516]: E0113 23:41:14.761670 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.762187 kubelet[3516]: W0113 23:41:14.761705 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.762187 kubelet[3516]: E0113 23:41:14.761739 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.765214 kubelet[3516]: E0113 23:41:14.764341 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.765214 kubelet[3516]: W0113 23:41:14.764576 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.765214 kubelet[3516]: E0113 23:41:14.764613 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.765948 kubelet[3516]: E0113 23:41:14.765826 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.767245 kubelet[3516]: W0113 23:41:14.766941 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.767245 kubelet[3516]: E0113 23:41:14.766992 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:14.767829 kubelet[3516]: E0113 23:41:14.767794 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.768409 kubelet[3516]: W0113 23:41:14.768059 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.768954 kubelet[3516]: E0113 23:41:14.768629 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.771722 kubelet[3516]: E0113 23:41:14.771125 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.771722 kubelet[3516]: W0113 23:41:14.771171 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.771722 kubelet[3516]: E0113 23:41:14.771213 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.772674 kubelet[3516]: E0113 23:41:14.772282 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.773081 kubelet[3516]: W0113 23:41:14.773025 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.773737 kubelet[3516]: E0113 23:41:14.773207 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.775354 kubelet[3516]: E0113 23:41:14.775056 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.775354 kubelet[3516]: W0113 23:41:14.775091 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.775354 kubelet[3516]: E0113 23:41:14.775124 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.776526 kubelet[3516]: E0113 23:41:14.776186 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.777068 kubelet[3516]: W0113 23:41:14.776738 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.777068 kubelet[3516]: E0113 23:41:14.776790 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:14.780453 kubelet[3516]: E0113 23:41:14.779988 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.780453 kubelet[3516]: W0113 23:41:14.780030 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.780453 kubelet[3516]: E0113 23:41:14.780064 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.781712 kubelet[3516]: E0113 23:41:14.780941 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.781712 kubelet[3516]: W0113 23:41:14.780975 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.781712 kubelet[3516]: E0113 23:41:14.781008 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:41:14.782745 kubelet[3516]: E0113 23:41:14.782036 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:41:14.782745 kubelet[3516]: W0113 23:41:14.782062 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:41:14.782745 kubelet[3516]: E0113 23:41:14.782092 3516 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:41:15.236765 containerd[1894]: time="2026-01-13T23:41:15.236695213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:15.238795 containerd[1894]: time="2026-01-13T23:41:15.238628527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:15.242261 containerd[1894]: time="2026-01-13T23:41:15.241511156Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:15.246504 containerd[1894]: time="2026-01-13T23:41:15.246376719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:15.248885 containerd[1894]: time="2026-01-13T23:41:15.248230098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.281328449s" Jan 13 23:41:15.248885 containerd[1894]: time="2026-01-13T23:41:15.248296167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 13 23:41:15.258297 containerd[1894]: time="2026-01-13T23:41:15.258248012Z" level=info msg="CreateContainer within sandbox \"3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 23:41:15.278427 containerd[1894]: time="2026-01-13T23:41:15.278371221Z" level=info msg="Container 3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:41:15.286412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534157641.mount: Deactivated successfully. Jan 13 23:41:15.304945 containerd[1894]: time="2026-01-13T23:41:15.304816717Z" level=info msg="CreateContainer within sandbox \"3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9\"" Jan 13 23:41:15.305959 containerd[1894]: time="2026-01-13T23:41:15.305746234Z" level=info msg="StartContainer for \"3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9\"" Jan 13 23:41:15.314723 containerd[1894]: time="2026-01-13T23:41:15.314659416Z" level=info msg="connecting to shim 3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9" address="unix:///run/containerd/s/937641e1ba0f0117aa0e18b452b6da0c0c6a96f1a36a66dbcab68cdeb9486d29" protocol=ttrpc version=3 Jan 13 23:41:15.369301 systemd[1]: Started cri-containerd-3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9.scope - libcontainer container 3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9. 
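Editor's note: the burst of kubelet driver-call.go errors above is the FlexVolume dynamic probe racing the flexvol-driver init container. Until the pod2daemon-flexvol image just pulled has installed the nodeagent~uds/uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, the probe finds nothing to execute, gets empty output, and "unexpected end of JSON input" follows. As a hedged illustration only (this is not the Calico driver, and the script is hypothetical), a FlexVolume-style executable answers the kubelet's init call by printing a JSON status object to stdout:

    #!/usr/bin/env python3
    # Hypothetical stand-in for a FlexVolume driver executable; shown only to
    # illustrate the JSON handshake behind the "unexpected end of JSON input"
    # errors above. The real driver here is the uds binary installed by the
    # pod2daemon-flexvol init container.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Empty stdout is exactly what the kubelet failed to unmarshal above.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported", "message": f"operation {op} not implemented"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Once the real binary is in place, later probes succeed, which is consistent with these messages not recurring further down in this log.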
Jan 13 23:41:15.497000 audit: BPF prog-id=172 op=LOAD Jan 13 23:41:15.497000 audit[4185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4026 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:15.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365626665396130633264313137346135386261356162613339333036 Jan 13 23:41:15.497000 audit: BPF prog-id=173 op=LOAD Jan 13 23:41:15.497000 audit[4185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4026 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:15.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365626665396130633264313137346135386261356162613339333036 Jan 13 23:41:15.498000 audit: BPF prog-id=173 op=UNLOAD Jan 13 23:41:15.498000 audit[4185]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:15.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365626665396130633264313137346135386261356162613339333036 Jan 13 23:41:15.498000 audit: BPF prog-id=172 op=UNLOAD Jan 13 23:41:15.498000 audit[4185]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:15.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365626665396130633264313137346135386261356162613339333036 Jan 13 23:41:15.498000 audit: BPF prog-id=174 op=LOAD Jan 13 23:41:15.498000 audit[4185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4026 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:15.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365626665396130633264313137346135386261356162613339333036 Jan 13 23:41:15.552432 containerd[1894]: time="2026-01-13T23:41:15.552338070Z" level=info msg="StartContainer for 
\"3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9\" returns successfully" Jan 13 23:41:15.585094 systemd[1]: cri-containerd-3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9.scope: Deactivated successfully. Jan 13 23:41:15.587000 audit: BPF prog-id=174 op=UNLOAD Jan 13 23:41:15.593253 containerd[1894]: time="2026-01-13T23:41:15.593056287Z" level=info msg="received container exit event container_id:\"3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9\" id:\"3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9\" pid:4199 exited_at:{seconds:1768347675 nanos:591756337}" Jan 13 23:41:15.655771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ebfe9a0c2d1174a58ba5aba39306c25d82c17188415f9da000bd6f142a3aef9-rootfs.mount: Deactivated successfully. Jan 13 23:41:15.810000 audit[4238]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:15.810000 audit[4238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff6238320 a2=0 a3=1 items=0 ppid=3663 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:15.810000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:15.816000 audit[4238]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:15.816000 audit[4238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff6238320 a2=0 a3=1 items=0 ppid=3663 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:15.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:16.393525 kubelet[3516]: E0113 23:41:16.393386 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:16.680539 containerd[1894]: time="2026-01-13T23:41:16.680187228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 13 23:41:18.398199 kubelet[3516]: E0113 23:41:18.397988 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:19.575622 containerd[1894]: time="2026-01-13T23:41:19.575564342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:19.577516 containerd[1894]: time="2026-01-13T23:41:19.577441756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 13 23:41:19.578030 containerd[1894]: time="2026-01-13T23:41:19.577954796Z" level=info 
msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:19.581634 containerd[1894]: time="2026-01-13T23:41:19.581552046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:19.583222 containerd[1894]: time="2026-01-13T23:41:19.582978551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.902723393s" Jan 13 23:41:19.583222 containerd[1894]: time="2026-01-13T23:41:19.583033731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 13 23:41:19.592146 containerd[1894]: time="2026-01-13T23:41:19.591885262Z" level=info msg="CreateContainer within sandbox \"3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 23:41:19.605282 containerd[1894]: time="2026-01-13T23:41:19.605229787Z" level=info msg="Container 38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:41:19.619935 containerd[1894]: time="2026-01-13T23:41:19.619825457Z" level=info msg="CreateContainer within sandbox \"3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d\"" Jan 13 23:41:19.621217 containerd[1894]: time="2026-01-13T23:41:19.621092391Z" level=info msg="StartContainer for \"38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d\"" Jan 13 23:41:19.625589 containerd[1894]: time="2026-01-13T23:41:19.625532894Z" level=info msg="connecting to shim 38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d" address="unix:///run/containerd/s/937641e1ba0f0117aa0e18b452b6da0c0c6a96f1a36a66dbcab68cdeb9486d29" protocol=ttrpc version=3 Jan 13 23:41:19.671293 systemd[1]: Started cri-containerd-38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d.scope - libcontainer container 38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d. 
Jan 13 23:41:19.770000 audit: BPF prog-id=175 op=LOAD Jan 13 23:41:19.775609 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 13 23:41:19.775686 kernel: audit: type=1334 audit(1768347679.770:579): prog-id=175 op=LOAD Jan 13 23:41:19.775733 kernel: audit: type=1300 audit(1768347679.770:579): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=4026 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:19.770000 audit[4247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=4026 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:19.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338613062336262313334383431376566396639646162636131366539 Jan 13 23:41:19.787143 kernel: audit: type=1327 audit(1768347679.770:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338613062336262313334383431376566396639646162636131366539 Jan 13 23:41:19.770000 audit: BPF prog-id=176 op=LOAD Jan 13 23:41:19.788923 kernel: audit: type=1334 audit(1768347679.770:580): prog-id=176 op=LOAD Jan 13 23:41:19.770000 audit[4247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=4026 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:19.795305 kernel: audit: type=1300 audit(1768347679.770:580): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=4026 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:19.795412 kernel: audit: type=1327 audit(1768347679.770:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338613062336262313334383431376566396639646162636131366539 Jan 13 23:41:19.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338613062336262313334383431376566396639646162636131366539 Jan 13 23:41:19.772000 audit: BPF prog-id=176 op=UNLOAD Jan 13 23:41:19.802734 kernel: audit: type=1334 audit(1768347679.772:581): prog-id=176 op=UNLOAD Jan 13 23:41:19.802952 kernel: audit: type=1300 audit(1768347679.772:581): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:19.772000 
audit[4247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:19.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338613062336262313334383431376566396639646162636131366539 Jan 13 23:41:19.816606 kernel: audit: type=1327 audit(1768347679.772:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338613062336262313334383431376566396639646162636131366539 Jan 13 23:41:19.818124 kernel: audit: type=1334 audit(1768347679.772:582): prog-id=175 op=UNLOAD Jan 13 23:41:19.772000 audit: BPF prog-id=175 op=UNLOAD Jan 13 23:41:19.772000 audit[4247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:19.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338613062336262313334383431376566396639646162636131366539 Jan 13 23:41:19.772000 audit: BPF prog-id=177 op=LOAD Jan 13 23:41:19.772000 audit[4247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=4026 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:19.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338613062336262313334383431376566396639646162636131366539 Jan 13 23:41:19.851001 containerd[1894]: time="2026-01-13T23:41:19.850726352Z" level=info msg="StartContainer for \"38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d\" returns successfully" Jan 13 23:41:20.394129 kubelet[3516]: E0113 23:41:20.393732 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:20.917170 containerd[1894]: time="2026-01-13T23:41:20.916918109Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 23:41:20.922771 systemd[1]: cri-containerd-38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d.scope: Deactivated successfully. 
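Editor's note: the "cni plugin not initialized" messages (both the kubelet NetworkReady=false events and the containerd reload error just above) mean containerd has not yet found any *.conf or *.conflist file in /etc/cni/net.d. The install-cni container whose scope was just deactivated writes the CNI kubeconfig first (the WRITE event on /etc/cni/net.d/calico-kubeconfig) and the network config after it. Purely as a sketch of that readiness condition, with the conventional Calico file name 10-calico.conflist treated as an assumption:

    # Sketch only: poll /etc/cni/net.d until a CNI network config appears, the
    # condition behind the "no network config found in /etc/cni/net.d" error.
    import glob
    import time

    def wait_for_cni_config(net_d: str = "/etc/cni/net.d", timeout: float = 120.0) -> list[str]:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            configs = sorted(glob.glob(f"{net_d}/*.conf") + glob.glob(f"{net_d}/*.conflist"))
            if configs:
                return configs
            time.sleep(2.0)
        raise TimeoutError(f"no CNI network config appeared in {net_d}")

    # Expected result once Calico finishes, e.g. ['/etc/cni/net.d/10-calico.conflist']
    # (file name assumed; the log does not show it).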
Jan 13 23:41:20.925110 systemd[1]: cri-containerd-38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d.scope: Consumed 995ms CPU time, 186.8M memory peak, 165.9M written to disk. Jan 13 23:41:20.925000 audit: BPF prog-id=177 op=UNLOAD Jan 13 23:41:20.928199 containerd[1894]: time="2026-01-13T23:41:20.928137944Z" level=info msg="received container exit event container_id:\"38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d\" id:\"38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d\" pid:4259 exited_at:{seconds:1768347680 nanos:926490288}" Jan 13 23:41:20.950199 kubelet[3516]: I0113 23:41:20.949981 3516 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 13 23:41:20.995808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38a0b3bb1348417ef9f9dabca16e937bf6ea9968bfa2d6d1299dcf79cfa0ce3d-rootfs.mount: Deactivated successfully. Jan 13 23:41:21.079490 systemd[1]: Created slice kubepods-burstable-podb87a7357_7c47_49ad_871f_3e101a102b84.slice - libcontainer container kubepods-burstable-podb87a7357_7c47_49ad_871f_3e101a102b84.slice. Jan 13 23:41:21.109451 kubelet[3516]: I0113 23:41:21.109377 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b87a7357-7c47-49ad-871f-3e101a102b84-config-volume\") pod \"coredns-66bc5c9577-mslr6\" (UID: \"b87a7357-7c47-49ad-871f-3e101a102b84\") " pod="kube-system/coredns-66bc5c9577-mslr6" Jan 13 23:41:21.109451 kubelet[3516]: I0113 23:41:21.109447 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnwrn\" (UniqueName: \"kubernetes.io/projected/53d99ef2-7e93-4ffd-bfa2-64159cfed963-kube-api-access-hnwrn\") pod \"coredns-66bc5c9577-dpd56\" (UID: \"53d99ef2-7e93-4ffd-bfa2-64159cfed963\") " pod="kube-system/coredns-66bc5c9577-dpd56" Jan 13 23:41:21.109759 kubelet[3516]: I0113 23:41:21.109490 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9kr\" (UniqueName: \"kubernetes.io/projected/c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a-kube-api-access-mk9kr\") pod \"calico-kube-controllers-6488775c94-xpm9n\" (UID: \"c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a\") " pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" Jan 13 23:41:21.109759 kubelet[3516]: I0113 23:41:21.109540 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53d99ef2-7e93-4ffd-bfa2-64159cfed963-config-volume\") pod \"coredns-66bc5c9577-dpd56\" (UID: \"53d99ef2-7e93-4ffd-bfa2-64159cfed963\") " pod="kube-system/coredns-66bc5c9577-dpd56" Jan 13 23:41:21.109759 kubelet[3516]: I0113 23:41:21.109575 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a-tigera-ca-bundle\") pod \"calico-kube-controllers-6488775c94-xpm9n\" (UID: \"c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a\") " pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" Jan 13 23:41:21.109759 kubelet[3516]: I0113 23:41:21.109617 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jwm\" (UniqueName: \"kubernetes.io/projected/b87a7357-7c47-49ad-871f-3e101a102b84-kube-api-access-s5jwm\") pod \"coredns-66bc5c9577-mslr6\" (UID: 
\"b87a7357-7c47-49ad-871f-3e101a102b84\") " pod="kube-system/coredns-66bc5c9577-mslr6" Jan 13 23:41:21.119451 systemd[1]: Created slice kubepods-besteffort-podc516ddef_9b61_4137_ae4e_9c5e1b1d0b5a.slice - libcontainer container kubepods-besteffort-podc516ddef_9b61_4137_ae4e_9c5e1b1d0b5a.slice. Jan 13 23:41:21.150436 systemd[1]: Created slice kubepods-burstable-pod53d99ef2_7e93_4ffd_bfa2_64159cfed963.slice - libcontainer container kubepods-burstable-pod53d99ef2_7e93_4ffd_bfa2_64159cfed963.slice. Jan 13 23:41:21.176781 systemd[1]: Created slice kubepods-besteffort-pod0af43ea4_f4bb_4d0b_8dc0_4fb300220dfb.slice - libcontainer container kubepods-besteffort-pod0af43ea4_f4bb_4d0b_8dc0_4fb300220dfb.slice. Jan 13 23:41:21.203372 systemd[1]: Created slice kubepods-besteffort-podbc4e2f94_7e3c_446b_9bce_55e8c6abc38d.slice - libcontainer container kubepods-besteffort-podbc4e2f94_7e3c_446b_9bce_55e8c6abc38d.slice. Jan 13 23:41:21.224633 kubelet[3516]: I0113 23:41:21.224553 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-whisker-backend-key-pair\") pod \"whisker-7bd4554bf4-fb4sd\" (UID: \"b24cd04e-6152-4ecd-b62b-7b335c0d5f20\") " pod="calico-system/whisker-7bd4554bf4-fb4sd" Jan 13 23:41:21.224791 kubelet[3516]: I0113 23:41:21.224637 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc4e2f94-7e3c-446b-9bce-55e8c6abc38d-calico-apiserver-certs\") pod \"calico-apiserver-7569cdf946-2r7qk\" (UID: \"bc4e2f94-7e3c-446b-9bce-55e8c6abc38d\") " pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" Jan 13 23:41:21.224791 kubelet[3516]: I0113 23:41:21.224701 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrgnk\" (UniqueName: \"kubernetes.io/projected/497e3e66-1726-45f4-a990-23061cc5868e-kube-api-access-jrgnk\") pod \"goldmane-7c778bb748-xqbbk\" (UID: \"497e3e66-1726-45f4-a990-23061cc5868e\") " pod="calico-system/goldmane-7c778bb748-xqbbk" Jan 13 23:41:21.224791 kubelet[3516]: I0113 23:41:21.224740 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb-calico-apiserver-certs\") pod \"calico-apiserver-7569cdf946-hmfxf\" (UID: \"0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb\") " pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" Jan 13 23:41:21.224791 kubelet[3516]: I0113 23:41:21.224781 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-whisker-ca-bundle\") pod \"whisker-7bd4554bf4-fb4sd\" (UID: \"b24cd04e-6152-4ecd-b62b-7b335c0d5f20\") " pod="calico-system/whisker-7bd4554bf4-fb4sd" Jan 13 23:41:21.226132 kubelet[3516]: I0113 23:41:21.224817 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ttns\" (UniqueName: \"kubernetes.io/projected/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-kube-api-access-5ttns\") pod \"whisker-7bd4554bf4-fb4sd\" (UID: \"b24cd04e-6152-4ecd-b62b-7b335c0d5f20\") " pod="calico-system/whisker-7bd4554bf4-fb4sd" Jan 13 23:41:21.226132 kubelet[3516]: I0113 23:41:21.224863 3516 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24st\" (UniqueName: \"kubernetes.io/projected/0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb-kube-api-access-r24st\") pod \"calico-apiserver-7569cdf946-hmfxf\" (UID: \"0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb\") " pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" Jan 13 23:41:21.228924 kubelet[3516]: I0113 23:41:21.228839 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/497e3e66-1726-45f4-a990-23061cc5868e-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-xqbbk\" (UID: \"497e3e66-1726-45f4-a990-23061cc5868e\") " pod="calico-system/goldmane-7c778bb748-xqbbk" Jan 13 23:41:21.229090 kubelet[3516]: I0113 23:41:21.228959 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/497e3e66-1726-45f4-a990-23061cc5868e-config\") pod \"goldmane-7c778bb748-xqbbk\" (UID: \"497e3e66-1726-45f4-a990-23061cc5868e\") " pod="calico-system/goldmane-7c778bb748-xqbbk" Jan 13 23:41:21.229090 kubelet[3516]: I0113 23:41:21.229012 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/497e3e66-1726-45f4-a990-23061cc5868e-goldmane-key-pair\") pod \"goldmane-7c778bb748-xqbbk\" (UID: \"497e3e66-1726-45f4-a990-23061cc5868e\") " pod="calico-system/goldmane-7c778bb748-xqbbk" Jan 13 23:41:21.229090 kubelet[3516]: I0113 23:41:21.229056 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxc2h\" (UniqueName: \"kubernetes.io/projected/bc4e2f94-7e3c-446b-9bce-55e8c6abc38d-kube-api-access-qxc2h\") pod \"calico-apiserver-7569cdf946-2r7qk\" (UID: \"bc4e2f94-7e3c-446b-9bce-55e8c6abc38d\") " pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" Jan 13 23:41:21.237713 systemd[1]: Created slice kubepods-besteffort-podb24cd04e_6152_4ecd_b62b_7b335c0d5f20.slice - libcontainer container kubepods-besteffort-podb24cd04e_6152_4ecd_b62b_7b335c0d5f20.slice. Jan 13 23:41:21.266204 systemd[1]: Created slice kubepods-besteffort-pod497e3e66_1726_45f4_a990_23061cc5868e.slice - libcontainer container kubepods-besteffort-pod497e3e66_1726_45f4_a990_23061cc5868e.slice. 
Jan 13 23:41:21.481266 containerd[1894]: time="2026-01-13T23:41:21.481204243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mslr6,Uid:b87a7357-7c47-49ad-871f-3e101a102b84,Namespace:kube-system,Attempt:0,}" Jan 13 23:41:21.498059 containerd[1894]: time="2026-01-13T23:41:21.497989879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6488775c94-xpm9n,Uid:c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:21.500217 containerd[1894]: time="2026-01-13T23:41:21.499185221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7569cdf946-hmfxf,Uid:0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:41:21.506671 containerd[1894]: time="2026-01-13T23:41:21.506605049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dpd56,Uid:53d99ef2-7e93-4ffd-bfa2-64159cfed963,Namespace:kube-system,Attempt:0,}" Jan 13 23:41:21.529526 containerd[1894]: time="2026-01-13T23:41:21.529369902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7569cdf946-2r7qk,Uid:bc4e2f94-7e3c-446b-9bce-55e8c6abc38d,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:41:21.564315 containerd[1894]: time="2026-01-13T23:41:21.564191804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd4554bf4-fb4sd,Uid:b24cd04e-6152-4ecd-b62b-7b335c0d5f20,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:21.578111 containerd[1894]: time="2026-01-13T23:41:21.578035983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xqbbk,Uid:497e3e66-1726-45f4-a990-23061cc5868e,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:21.871170 containerd[1894]: time="2026-01-13T23:41:21.870967426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 13 23:41:22.139176 containerd[1894]: time="2026-01-13T23:41:22.138776061Z" level=error msg="Failed to destroy network for sandbox \"531ca5f0f6a1695db9982497f0be5f930b4c9a56742963bb732eac4966036bc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.145344 systemd[1]: run-netns-cni\x2d7278dd03\x2d0d8a\x2d0a1d\x2d5cc4\x2d8499b2a31007.mount: Deactivated successfully. 
Jan 13 23:41:22.151660 containerd[1894]: time="2026-01-13T23:41:22.151550829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd4554bf4-fb4sd,Uid:b24cd04e-6152-4ecd-b62b-7b335c0d5f20,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"531ca5f0f6a1695db9982497f0be5f930b4c9a56742963bb732eac4966036bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.152420 kubelet[3516]: E0113 23:41:22.152048 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"531ca5f0f6a1695db9982497f0be5f930b4c9a56742963bb732eac4966036bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.152420 kubelet[3516]: E0113 23:41:22.152152 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"531ca5f0f6a1695db9982497f0be5f930b4c9a56742963bb732eac4966036bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bd4554bf4-fb4sd" Jan 13 23:41:22.152420 kubelet[3516]: E0113 23:41:22.152186 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"531ca5f0f6a1695db9982497f0be5f930b4c9a56742963bb732eac4966036bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bd4554bf4-fb4sd" Jan 13 23:41:22.154551 kubelet[3516]: E0113 23:41:22.152284 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bd4554bf4-fb4sd_calico-system(b24cd04e-6152-4ecd-b62b-7b335c0d5f20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bd4554bf4-fb4sd_calico-system(b24cd04e-6152-4ecd-b62b-7b335c0d5f20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"531ca5f0f6a1695db9982497f0be5f930b4c9a56742963bb732eac4966036bc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bd4554bf4-fb4sd" podUID="b24cd04e-6152-4ecd-b62b-7b335c0d5f20" Jan 13 23:41:22.161767 containerd[1894]: time="2026-01-13T23:41:22.161672151Z" level=error msg="Failed to destroy network for sandbox \"1317dd6158396dfa367bba55eef601b7f277ff5de813b25334e45825be3d9fac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.170593 systemd[1]: run-netns-cni\x2d84b0379d\x2d5262\x2d912a\x2d09b5\x2d65d6c2ab4c41.mount: Deactivated successfully. 
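Editor's note: every RunPodSandbox failure in this stretch has the same root cause, which the plugin states directly: /var/lib/calico/nodename does not exist yet. That file is written by the calico-node container once it is running (its image, ghcr.io/flatcar/calico/node:v3.30.4, is still being pulled a few lines up), so all CNI add and delete calls fail until then. A trivial sketch of that precondition:

    # Sketch of the readiness check implied by the "stat /var/lib/calico/nodename:
    # no such file or directory" errors: calico-node writes its node name there
    # once it has started and mounted /var/lib/calico/.
    from pathlib import Path

    def calico_node_ready(path: str = "/var/lib/calico/nodename") -> bool:
        p = Path(path)
        return p.is_file() and bool(p.read_text().strip())

    # Returns False while calico-node is still starting, matching the repeated
    # sandbox setup failures for whisker, the apiservers, coredns, and the
    # kube-controllers pods above and below.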
Jan 13 23:41:22.178662 containerd[1894]: time="2026-01-13T23:41:22.178543367Z" level=error msg="Failed to destroy network for sandbox \"9463f8ac9ee1c8135d36ece56c5d5fc667d530925fee15856bea0d76fd3ed91b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.179476 containerd[1894]: time="2026-01-13T23:41:22.179078498Z" level=error msg="Failed to destroy network for sandbox \"3fc9c5ca48bec5b75f2d4b8e27b69f2630c6ec551a31f6c94e9f9886a859f8d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.179476 containerd[1894]: time="2026-01-13T23:41:22.179428809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7569cdf946-2r7qk,Uid:bc4e2f94-7e3c-446b-9bce-55e8c6abc38d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1317dd6158396dfa367bba55eef601b7f277ff5de813b25334e45825be3d9fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.181843 kubelet[3516]: E0113 23:41:22.179724 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1317dd6158396dfa367bba55eef601b7f277ff5de813b25334e45825be3d9fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.181843 kubelet[3516]: E0113 23:41:22.179800 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1317dd6158396dfa367bba55eef601b7f277ff5de813b25334e45825be3d9fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" Jan 13 23:41:22.181843 kubelet[3516]: E0113 23:41:22.179833 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1317dd6158396dfa367bba55eef601b7f277ff5de813b25334e45825be3d9fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" Jan 13 23:41:22.182080 kubelet[3516]: E0113 23:41:22.181269 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7569cdf946-2r7qk_calico-apiserver(bc4e2f94-7e3c-446b-9bce-55e8c6abc38d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7569cdf946-2r7qk_calico-apiserver(bc4e2f94-7e3c-446b-9bce-55e8c6abc38d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1317dd6158396dfa367bba55eef601b7f277ff5de813b25334e45825be3d9fac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:41:22.191634 systemd[1]: run-netns-cni\x2daae3bf96\x2d02a2\x2d2b2a\x2df2df\x2dd009fc2049d6.mount: Deactivated successfully. Jan 13 23:41:22.196016 containerd[1894]: time="2026-01-13T23:41:22.195643945Z" level=error msg="Failed to destroy network for sandbox \"1d1611ada184f9ad58b84fbb7bd55671fbc35c9224770a9f9f543f59772e3154\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.203166 systemd[1]: run-netns-cni\x2d2652bb29\x2d42be\x2d0050\x2d211d\x2da45c4817b974.mount: Deactivated successfully. Jan 13 23:41:22.203455 systemd[1]: run-netns-cni\x2d8a0811ed\x2d2720\x2d1854\x2d47e1\x2dffde677f7913.mount: Deactivated successfully. Jan 13 23:41:22.207325 containerd[1894]: time="2026-01-13T23:41:22.206972927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6488775c94-xpm9n,Uid:c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9463f8ac9ee1c8135d36ece56c5d5fc667d530925fee15856bea0d76fd3ed91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.207708 kubelet[3516]: E0113 23:41:22.207658 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9463f8ac9ee1c8135d36ece56c5d5fc667d530925fee15856bea0d76fd3ed91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.208039 kubelet[3516]: E0113 23:41:22.207884 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9463f8ac9ee1c8135d36ece56c5d5fc667d530925fee15856bea0d76fd3ed91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" Jan 13 23:41:22.208039 kubelet[3516]: E0113 23:41:22.207978 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9463f8ac9ee1c8135d36ece56c5d5fc667d530925fee15856bea0d76fd3ed91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" Jan 13 23:41:22.209539 containerd[1894]: time="2026-01-13T23:41:22.208330001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dpd56,Uid:53d99ef2-7e93-4ffd-bfa2-64159cfed963,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d1611ada184f9ad58b84fbb7bd55671fbc35c9224770a9f9f543f59772e3154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
23:41:22.209731 kubelet[3516]: E0113 23:41:22.208334 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6488775c94-xpm9n_calico-system(c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6488775c94-xpm9n_calico-system(c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9463f8ac9ee1c8135d36ece56c5d5fc667d530925fee15856bea0d76fd3ed91b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:41:22.210205 kubelet[3516]: E0113 23:41:22.210044 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d1611ada184f9ad58b84fbb7bd55671fbc35c9224770a9f9f543f59772e3154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.210205 kubelet[3516]: E0113 23:41:22.210124 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d1611ada184f9ad58b84fbb7bd55671fbc35c9224770a9f9f543f59772e3154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dpd56" Jan 13 23:41:22.210205 kubelet[3516]: E0113 23:41:22.210161 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d1611ada184f9ad58b84fbb7bd55671fbc35c9224770a9f9f543f59772e3154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dpd56" Jan 13 23:41:22.210523 kubelet[3516]: E0113 23:41:22.210232 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dpd56_kube-system(53d99ef2-7e93-4ffd-bfa2-64159cfed963)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dpd56_kube-system(53d99ef2-7e93-4ffd-bfa2-64159cfed963)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d1611ada184f9ad58b84fbb7bd55671fbc35c9224770a9f9f543f59772e3154\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dpd56" podUID="53d99ef2-7e93-4ffd-bfa2-64159cfed963" Jan 13 23:41:22.213220 containerd[1894]: time="2026-01-13T23:41:22.213068961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xqbbk,Uid:497e3e66-1726-45f4-a990-23061cc5868e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc9c5ca48bec5b75f2d4b8e27b69f2630c6ec551a31f6c94e9f9886a859f8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.213872 kubelet[3516]: E0113 23:41:22.213539 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc9c5ca48bec5b75f2d4b8e27b69f2630c6ec551a31f6c94e9f9886a859f8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.213872 kubelet[3516]: E0113 23:41:22.213616 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc9c5ca48bec5b75f2d4b8e27b69f2630c6ec551a31f6c94e9f9886a859f8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xqbbk" Jan 13 23:41:22.213872 kubelet[3516]: E0113 23:41:22.213651 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc9c5ca48bec5b75f2d4b8e27b69f2630c6ec551a31f6c94e9f9886a859f8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xqbbk" Jan 13 23:41:22.214318 kubelet[3516]: E0113 23:41:22.213738 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-xqbbk_calico-system(497e3e66-1726-45f4-a990-23061cc5868e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-xqbbk_calico-system(497e3e66-1726-45f4-a990-23061cc5868e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fc9c5ca48bec5b75f2d4b8e27b69f2630c6ec551a31f6c94e9f9886a859f8d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:41:22.231807 containerd[1894]: time="2026-01-13T23:41:22.231717905Z" level=error msg="Failed to destroy network for sandbox \"72001caa49bae93bd02ad96d6f17fad56b3c1a74435a0a8a07cb4aa131c929ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.234350 containerd[1894]: time="2026-01-13T23:41:22.234273130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7569cdf946-hmfxf,Uid:0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72001caa49bae93bd02ad96d6f17fad56b3c1a74435a0a8a07cb4aa131c929ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.235262 kubelet[3516]: E0113 23:41:22.235179 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"72001caa49bae93bd02ad96d6f17fad56b3c1a74435a0a8a07cb4aa131c929ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.235262 kubelet[3516]: E0113 23:41:22.235258 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72001caa49bae93bd02ad96d6f17fad56b3c1a74435a0a8a07cb4aa131c929ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" Jan 13 23:41:22.235704 kubelet[3516]: E0113 23:41:22.235293 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72001caa49bae93bd02ad96d6f17fad56b3c1a74435a0a8a07cb4aa131c929ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" Jan 13 23:41:22.235704 kubelet[3516]: E0113 23:41:22.235392 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7569cdf946-hmfxf_calico-apiserver(0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7569cdf946-hmfxf_calico-apiserver(0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72001caa49bae93bd02ad96d6f17fad56b3c1a74435a0a8a07cb4aa131c929ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:41:22.237056 containerd[1894]: time="2026-01-13T23:41:22.236277796Z" level=error msg="Failed to destroy network for sandbox \"9eb96f542d36c97d17f09769c4705aac65b981d962d06eaaa328d5645e73d65b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.240052 containerd[1894]: time="2026-01-13T23:41:22.239977901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mslr6,Uid:b87a7357-7c47-49ad-871f-3e101a102b84,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eb96f542d36c97d17f09769c4705aac65b981d962d06eaaa328d5645e73d65b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.240944 kubelet[3516]: E0113 23:41:22.240292 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eb96f542d36c97d17f09769c4705aac65b981d962d06eaaa328d5645e73d65b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.240944 kubelet[3516]: E0113 
23:41:22.240366 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eb96f542d36c97d17f09769c4705aac65b981d962d06eaaa328d5645e73d65b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mslr6" Jan 13 23:41:22.240944 kubelet[3516]: E0113 23:41:22.240399 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eb96f542d36c97d17f09769c4705aac65b981d962d06eaaa328d5645e73d65b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mslr6" Jan 13 23:41:22.242451 kubelet[3516]: E0113 23:41:22.240495 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mslr6_kube-system(b87a7357-7c47-49ad-871f-3e101a102b84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mslr6_kube-system(b87a7357-7c47-49ad-871f-3e101a102b84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9eb96f542d36c97d17f09769c4705aac65b981d962d06eaaa328d5645e73d65b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mslr6" podUID="b87a7357-7c47-49ad-871f-3e101a102b84" Jan 13 23:41:22.408481 systemd[1]: Created slice kubepods-besteffort-pode305c05b_4fdf_40a3_854a_8a106f493072.slice - libcontainer container kubepods-besteffort-pode305c05b_4fdf_40a3_854a_8a106f493072.slice. 
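
The kubepods-besteffort-pod...slice unit created above encodes the pod UID e305c05b-4fdf-40a3-854a-8a106f493072 with its dashes replaced by underscores, since a literal "-" would otherwise be read as a slice-hierarchy separator in the systemd unit name. A hypothetical helper showing that mapping, not kubelet's actual implementation:

// Hypothetical helper (not kubelet code): derives the besteffort slice name
// seen in the log from the pod UID by swapping "-" for "_".
package main

import (
	"fmt"
	"strings"
)

func besteffortSliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID taken from the csi-node-driver-p84n5 sandbox entry below
	fmt.Println(besteffortSliceName("e305c05b-4fdf-40a3-854a-8a106f493072"))
	// -> kubepods-besteffort-pode305c05b_4fdf_40a3_854a_8a106f493072.slice
}
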
Jan 13 23:41:22.416882 containerd[1894]: time="2026-01-13T23:41:22.416512482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p84n5,Uid:e305c05b-4fdf-40a3-854a-8a106f493072,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:22.513385 containerd[1894]: time="2026-01-13T23:41:22.513303282Z" level=error msg="Failed to destroy network for sandbox \"8f4498842c0bb08bcb4b573bc427aab39cd28074ff7eb8f8b5d1bc6580c57582\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.516111 containerd[1894]: time="2026-01-13T23:41:22.516004908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p84n5,Uid:e305c05b-4fdf-40a3-854a-8a106f493072,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4498842c0bb08bcb4b573bc427aab39cd28074ff7eb8f8b5d1bc6580c57582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.516599 kubelet[3516]: E0113 23:41:22.516496 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4498842c0bb08bcb4b573bc427aab39cd28074ff7eb8f8b5d1bc6580c57582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:41:22.517014 kubelet[3516]: E0113 23:41:22.516751 3516 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4498842c0bb08bcb4b573bc427aab39cd28074ff7eb8f8b5d1bc6580c57582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p84n5" Jan 13 23:41:22.517014 kubelet[3516]: E0113 23:41:22.516796 3516 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4498842c0bb08bcb4b573bc427aab39cd28074ff7eb8f8b5d1bc6580c57582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p84n5" Jan 13 23:41:22.517242 kubelet[3516]: E0113 23:41:22.516877 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f4498842c0bb08bcb4b573bc427aab39cd28074ff7eb8f8b5d1bc6580c57582\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:22.989797 systemd[1]: run-netns-cni\x2d7c1d41e3\x2da400\x2dba35\x2d27e3\x2d738f3348e7a5.mount: Deactivated successfully. 
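
Similarly, the \x2d sequences in the run-netns-cni... mount units above are systemd's escaping of literal "-" characters in the mounted path, here the per-sandbox network namespaces under /run/netns/. An illustrative decoder, assuming only the standard \xNN escape convention and that unescaped dashes separate path components; it is not systemd's implementation:

// Illustrative only (not systemd code): recover the mount path from an
// escaped mount unit name such as run-netns-cni\x2d7c1d41e3\x2d....mount.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unitNameToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); {
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v)) // \x2d -> literal '-'
				i += 4
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/') // unescaped '-' separates path components
		} else {
			b.WriteByte(name[i])
		}
		i++
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unitNameToPath(`run-netns-cni\x2d7c1d41e3\x2da400\x2dba35\x2d27e3\x2d738f3348e7a5.mount`))
	// -> /run/netns/cni-7c1d41e3-a400-ba35-27e3-738f3348e7a5
}
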
Jan 13 23:41:22.989998 systemd[1]: run-netns-cni\x2df5d3b14e\x2d7075\x2dcaf0\x2daefc\x2d4d20b91c09e7.mount: Deactivated successfully. Jan 13 23:41:28.990806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275700435.mount: Deactivated successfully. Jan 13 23:41:29.065774 containerd[1894]: time="2026-01-13T23:41:29.065658411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:29.067878 containerd[1894]: time="2026-01-13T23:41:29.067509917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 13 23:41:29.070125 containerd[1894]: time="2026-01-13T23:41:29.070068840Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:29.074740 containerd[1894]: time="2026-01-13T23:41:29.074689061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:41:29.075757 containerd[1894]: time="2026-01-13T23:41:29.075698609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.204657971s" Jan 13 23:41:29.075865 containerd[1894]: time="2026-01-13T23:41:29.075757739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 13 23:41:29.122940 containerd[1894]: time="2026-01-13T23:41:29.122731622Z" level=info msg="CreateContainer within sandbox \"3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 23:41:29.154099 containerd[1894]: time="2026-01-13T23:41:29.150601907Z" level=info msg="Container 72ca20012cd737bdaa2c12f2ec8e5df878d754c43eab6c5fb348425fabff034f: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:41:29.174332 containerd[1894]: time="2026-01-13T23:41:29.174147017Z" level=info msg="CreateContainer within sandbox \"3cca2d8145f5c3f3a8431c52562d4ed82b2428ef046dd27c5357e7106b1e72b7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"72ca20012cd737bdaa2c12f2ec8e5df878d754c43eab6c5fb348425fabff034f\"" Jan 13 23:41:29.176958 containerd[1894]: time="2026-01-13T23:41:29.175779785Z" level=info msg="StartContainer for \"72ca20012cd737bdaa2c12f2ec8e5df878d754c43eab6c5fb348425fabff034f\"" Jan 13 23:41:29.179301 containerd[1894]: time="2026-01-13T23:41:29.179252269Z" level=info msg="connecting to shim 72ca20012cd737bdaa2c12f2ec8e5df878d754c43eab6c5fb348425fabff034f" address="unix:///run/containerd/s/937641e1ba0f0117aa0e18b452b6da0c0c6a96f1a36a66dbcab68cdeb9486d29" protocol=ttrpc version=3 Jan 13 23:41:29.261299 systemd[1]: Started cri-containerd-72ca20012cd737bdaa2c12f2ec8e5df878d754c43eab6c5fb348425fabff034f.scope - libcontainer container 72ca20012cd737bdaa2c12f2ec8e5df878d754c43eab6c5fb348425fabff034f. 
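
For scale, the pull above moved 150930912 bytes in 7.204657971s, roughly 21 MB/s; a throwaway calculation from those two logged figures, not output of any tool in this log:

// Throwaway arithmetic: effective pull rate for
// ghcr.io/flatcar/calico/node:v3.30.4 from the figures logged above.
package main

import "fmt"

func main() {
	const bytesRead = 150930912 // "bytes read=150930912"
	const seconds = 7.204657971 // "in 7.204657971s"
	rate := bytesRead / seconds
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
}
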
Jan 13 23:41:29.342693 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 13 23:41:29.342868 kernel: audit: type=1334 audit(1768347689.339:585): prog-id=178 op=LOAD Jan 13 23:41:29.339000 audit: BPF prog-id=178 op=LOAD Jan 13 23:41:29.339000 audit[4516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4026 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:29.349516 kernel: audit: type=1300 audit(1768347689.339:585): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4026 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:29.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636132303031326364373337626461613263313266326563386535 Jan 13 23:41:29.355851 kernel: audit: type=1327 audit(1768347689.339:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636132303031326364373337626461613263313266326563386535 Jan 13 23:41:29.357593 kernel: audit: type=1334 audit(1768347689.339:586): prog-id=179 op=LOAD Jan 13 23:41:29.339000 audit: BPF prog-id=179 op=LOAD Jan 13 23:41:29.339000 audit[4516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4026 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:29.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636132303031326364373337626461613263313266326563386535 Jan 13 23:41:29.370774 kernel: audit: type=1300 audit(1768347689.339:586): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4026 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:29.371265 kernel: audit: type=1327 audit(1768347689.339:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636132303031326364373337626461613263313266326563386535 Jan 13 23:41:29.371663 kernel: audit: type=1334 audit(1768347689.341:587): prog-id=179 op=UNLOAD Jan 13 23:41:29.341000 audit: BPF prog-id=179 op=UNLOAD Jan 13 23:41:29.341000 audit[4516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:29.379175 kernel: audit: type=1300 
audit(1768347689.341:587): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:29.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636132303031326364373337626461613263313266326563386535 Jan 13 23:41:29.385022 kernel: audit: type=1327 audit(1768347689.341:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636132303031326364373337626461613263313266326563386535 Jan 13 23:41:29.341000 audit: BPF prog-id=178 op=UNLOAD Jan 13 23:41:29.391828 kernel: audit: type=1334 audit(1768347689.341:588): prog-id=178 op=UNLOAD Jan 13 23:41:29.341000 audit[4516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4026 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:29.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636132303031326364373337626461613263313266326563386535 Jan 13 23:41:29.341000 audit: BPF prog-id=180 op=LOAD Jan 13 23:41:29.341000 audit[4516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4026 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:29.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636132303031326364373337626461613263313266326563386535 Jan 13 23:41:29.431147 containerd[1894]: time="2026-01-13T23:41:29.431037792Z" level=info msg="StartContainer for \"72ca20012cd737bdaa2c12f2ec8e5df878d754c43eab6c5fb348425fabff034f\" returns successfully" Jan 13 23:41:29.737128 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 23:41:29.737463 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 23:41:30.063646 kubelet[3516]: I0113 23:41:30.063438 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hnqxs" podStartSLOduration=2.550394103 podStartE2EDuration="20.063415824s" podCreationTimestamp="2026-01-13 23:41:10 +0000 UTC" firstStartedPulling="2026-01-13 23:41:11.564337018 +0000 UTC m=+39.440363378" lastFinishedPulling="2026-01-13 23:41:29.077358739 +0000 UTC m=+56.953385099" observedRunningTime="2026-01-13 23:41:30.004289431 +0000 UTC m=+57.880315815" watchObservedRunningTime="2026-01-13 23:41:30.063415824 +0000 UTC m=+57.939442196" Jan 13 23:41:30.111798 kubelet[3516]: I0113 23:41:30.111740 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ttns\" (UniqueName: \"kubernetes.io/projected/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-kube-api-access-5ttns\") pod \"b24cd04e-6152-4ecd-b62b-7b335c0d5f20\" (UID: \"b24cd04e-6152-4ecd-b62b-7b335c0d5f20\") " Jan 13 23:41:30.112072 kubelet[3516]: I0113 23:41:30.111822 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-whisker-backend-key-pair\") pod \"b24cd04e-6152-4ecd-b62b-7b335c0d5f20\" (UID: \"b24cd04e-6152-4ecd-b62b-7b335c0d5f20\") " Jan 13 23:41:30.112072 kubelet[3516]: I0113 23:41:30.111877 3516 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-whisker-ca-bundle\") pod \"b24cd04e-6152-4ecd-b62b-7b335c0d5f20\" (UID: \"b24cd04e-6152-4ecd-b62b-7b335c0d5f20\") " Jan 13 23:41:30.114882 kubelet[3516]: I0113 23:41:30.114802 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b24cd04e-6152-4ecd-b62b-7b335c0d5f20" (UID: "b24cd04e-6152-4ecd-b62b-7b335c0d5f20"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 13 23:41:30.124301 kubelet[3516]: I0113 23:41:30.124228 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-kube-api-access-5ttns" (OuterVolumeSpecName: "kube-api-access-5ttns") pod "b24cd04e-6152-4ecd-b62b-7b335c0d5f20" (UID: "b24cd04e-6152-4ecd-b62b-7b335c0d5f20"). InnerVolumeSpecName "kube-api-access-5ttns". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 13 23:41:30.133328 systemd[1]: var-lib-kubelet-pods-b24cd04e\x2d6152\x2d4ecd\x2db62b\x2d7b335c0d5f20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5ttns.mount: Deactivated successfully. Jan 13 23:41:30.133643 systemd[1]: var-lib-kubelet-pods-b24cd04e\x2d6152\x2d4ecd\x2db62b\x2d7b335c0d5f20-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 13 23:41:30.143060 kubelet[3516]: I0113 23:41:30.142989 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b24cd04e-6152-4ecd-b62b-7b335c0d5f20" (UID: "b24cd04e-6152-4ecd-b62b-7b335c0d5f20"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 13 23:41:30.217262 kubelet[3516]: I0113 23:41:30.217198 3516 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5ttns\" (UniqueName: \"kubernetes.io/projected/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-kube-api-access-5ttns\") on node \"ip-172-31-24-127\" DevicePath \"\"" Jan 13 23:41:30.217262 kubelet[3516]: I0113 23:41:30.217260 3516 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-whisker-backend-key-pair\") on node \"ip-172-31-24-127\" DevicePath \"\"" Jan 13 23:41:30.217262 kubelet[3516]: I0113 23:41:30.217287 3516 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b24cd04e-6152-4ecd-b62b-7b335c0d5f20-whisker-ca-bundle\") on node \"ip-172-31-24-127\" DevicePath \"\"" Jan 13 23:41:30.418252 systemd[1]: Removed slice kubepods-besteffort-podb24cd04e_6152_4ecd_b62b_7b335c0d5f20.slice - libcontainer container kubepods-besteffort-podb24cd04e_6152_4ecd_b62b_7b335c0d5f20.slice. Jan 13 23:41:31.084228 systemd[1]: Created slice kubepods-besteffort-pod999f0f43_0933_4443_a84f_03be4dcf7cf6.slice - libcontainer container kubepods-besteffort-pod999f0f43_0933_4443_a84f_03be4dcf7cf6.slice. Jan 13 23:41:31.124128 kubelet[3516]: I0113 23:41:31.124036 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmnwc\" (UniqueName: \"kubernetes.io/projected/999f0f43-0933-4443-a84f-03be4dcf7cf6-kube-api-access-zmnwc\") pod \"whisker-d477b98f6-wrppl\" (UID: \"999f0f43-0933-4443-a84f-03be4dcf7cf6\") " pod="calico-system/whisker-d477b98f6-wrppl" Jan 13 23:41:31.124128 kubelet[3516]: I0113 23:41:31.124123 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/999f0f43-0933-4443-a84f-03be4dcf7cf6-whisker-backend-key-pair\") pod \"whisker-d477b98f6-wrppl\" (UID: \"999f0f43-0933-4443-a84f-03be4dcf7cf6\") " pod="calico-system/whisker-d477b98f6-wrppl" Jan 13 23:41:31.125740 kubelet[3516]: I0113 23:41:31.124176 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/999f0f43-0933-4443-a84f-03be4dcf7cf6-whisker-ca-bundle\") pod \"whisker-d477b98f6-wrppl\" (UID: \"999f0f43-0933-4443-a84f-03be4dcf7cf6\") " pod="calico-system/whisker-d477b98f6-wrppl" Jan 13 23:41:31.395416 containerd[1894]: time="2026-01-13T23:41:31.395267370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d477b98f6-wrppl,Uid:999f0f43-0933-4443-a84f-03be4dcf7cf6,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:32.360000 audit: BPF prog-id=181 op=LOAD Jan 13 23:41:32.360000 audit[4763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7e05bf8 a2=98 a3=ffffd7e05be8 items=0 ppid=4650 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.360000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:41:32.361000 audit: BPF 
prog-id=181 op=UNLOAD Jan 13 23:41:32.361000 audit[4763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd7e05bc8 a3=0 items=0 ppid=4650 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.361000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:41:32.361000 audit: BPF prog-id=182 op=LOAD Jan 13 23:41:32.361000 audit[4763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7e05aa8 a2=74 a3=95 items=0 ppid=4650 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.361000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:41:32.361000 audit: BPF prog-id=182 op=UNLOAD Jan 13 23:41:32.361000 audit[4763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4650 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.361000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:41:32.361000 audit: BPF prog-id=183 op=LOAD Jan 13 23:41:32.361000 audit[4763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7e05ad8 a2=40 a3=ffffd7e05b08 items=0 ppid=4650 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.361000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:41:32.361000 audit: BPF prog-id=183 op=UNLOAD Jan 13 23:41:32.361000 audit[4763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd7e05b08 items=0 ppid=4650 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.361000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:41:32.364000 audit: BPF prog-id=184 op=LOAD Jan 13 23:41:32.364000 audit[4764]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=3 a0=5 a1=ffffda61b1e8 a2=98 a3=ffffda61b1d8 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.364000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.364000 audit: BPF prog-id=184 op=UNLOAD Jan 13 23:41:32.364000 audit[4764]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffda61b1b8 a3=0 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.364000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.365000 audit: BPF prog-id=185 op=LOAD Jan 13 23:41:32.365000 audit[4764]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffda61ae78 a2=74 a3=95 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.365000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.366000 audit: BPF prog-id=185 op=UNLOAD Jan 13 23:41:32.366000 audit[4764]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.366000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.366000 audit: BPF prog-id=186 op=LOAD Jan 13 23:41:32.366000 audit[4764]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffda61aed8 a2=94 a3=2 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.366000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.366000 audit: BPF prog-id=186 op=UNLOAD Jan 13 23:41:32.366000 audit[4764]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.366000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.413124 kubelet[3516]: I0113 23:41:32.412999 3516 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24cd04e-6152-4ecd-b62b-7b335c0d5f20" path="/var/lib/kubelet/pods/b24cd04e-6152-4ecd-b62b-7b335c0d5f20/volumes" Jan 13 23:41:32.594000 audit: BPF prog-id=187 op=LOAD Jan 13 23:41:32.594000 audit[4764]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffda61ae98 a2=40 a3=ffffda61aec8 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.594000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.594000 audit: BPF prog-id=187 
op=UNLOAD Jan 13 23:41:32.594000 audit[4764]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffda61aec8 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.594000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.613000 audit: BPF prog-id=188 op=LOAD Jan 13 23:41:32.613000 audit[4764]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffda61aea8 a2=94 a3=4 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.613000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.613000 audit: BPF prog-id=188 op=UNLOAD Jan 13 23:41:32.613000 audit[4764]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.613000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.614000 audit: BPF prog-id=189 op=LOAD Jan 13 23:41:32.614000 audit[4764]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffda61ace8 a2=94 a3=5 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.614000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.615000 audit: BPF prog-id=189 op=UNLOAD Jan 13 23:41:32.615000 audit[4764]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.615000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.615000 audit: BPF prog-id=190 op=LOAD Jan 13 23:41:32.615000 audit[4764]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffda61af18 a2=94 a3=6 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.615000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.615000 audit: BPF prog-id=190 op=UNLOAD Jan 13 23:41:32.615000 audit[4764]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.615000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.615000 audit: BPF prog-id=191 op=LOAD Jan 13 23:41:32.615000 audit[4764]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffda61a6e8 a2=94 a3=83 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.615000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.616000 audit: BPF prog-id=192 op=LOAD Jan 13 23:41:32.616000 audit[4764]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffda61a4a8 a2=94 a3=2 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.616000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.616000 audit: BPF prog-id=192 op=UNLOAD Jan 13 23:41:32.616000 audit[4764]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.616000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.618000 audit: BPF prog-id=191 op=UNLOAD Jan 13 23:41:32.618000 audit[4764]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=e6e6620 a3=e6d9b00 items=0 ppid=4650 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:41:32.649000 audit: BPF prog-id=193 op=LOAD Jan 13 23:41:32.649000 audit[4777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffebb229d8 a2=98 a3=ffffebb229c8 items=0 ppid=4650 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.649000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:41:32.649000 audit: BPF prog-id=193 op=UNLOAD Jan 13 23:41:32.649000 audit[4777]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffebb229a8 a3=0 items=0 ppid=4650 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.649000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:41:32.650000 audit: BPF prog-id=194 op=LOAD Jan 13 23:41:32.650000 audit[4777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffebb22888 a2=74 a3=95 items=0 ppid=4650 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.650000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:41:32.650000 audit: BPF prog-id=194 op=UNLOAD Jan 13 23:41:32.650000 audit[4777]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4650 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.650000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:41:32.651000 audit: BPF prog-id=195 op=LOAD Jan 13 23:41:32.651000 audit[4777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffebb228b8 a2=40 a3=ffffebb228e8 items=0 ppid=4650 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.651000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:41:32.651000 audit: BPF prog-id=195 op=UNLOAD Jan 13 23:41:32.651000 audit[4777]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffebb228e8 items=0 ppid=4650 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:32.651000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:41:32.916508 systemd-networkd[1617]: vxlan.calico: Link UP Jan 13 23:41:32.917154 systemd-networkd[1617]: vxlan.calico: Gained carrier Jan 13 23:41:32.938534 (udev-worker)[4553]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:41:32.973782 (udev-worker)[4552]: Network interface NamePolicy= disabled on kernel command line. 
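
The long proctitle= values in the audit records above and below are hex-encoded command lines with NUL bytes separating the arguments; they decode to bpftool invocations such as "bpftool map list --json" and "bpftool map create /sys/fs/bpf/...". A small illustrative decoder, not part of auditd or any tool shown in this log:

// Illustrative decoder: audit PROCTITLE fields are hex-encoded argv vectors
// with NUL separators between arguments.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// value copied from one of the audit records above
	const p = "627066746F6F6C006D6170006C697374002D2D6A736F6E"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // bpftool map list --json
}
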
Jan 13 23:41:32.982697 systemd-networkd[1617]: calif66d11f0612: Link UP Jan 13 23:41:32.983275 systemd-networkd[1617]: calif66d11f0612: Gained carrier Jan 13 23:41:33.011000 audit: BPF prog-id=196 op=LOAD Jan 13 23:41:33.011000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3b44248 a2=98 a3=ffffe3b44238 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.011000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.012000 audit: BPF prog-id=196 op=UNLOAD Jan 13 23:41:33.012000 audit[4806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe3b44218 a3=0 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.012000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.013000 audit: BPF prog-id=197 op=LOAD Jan 13 23:41:33.014714 (udev-worker)[4551]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:41:33.013000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3b43f28 a2=74 a3=95 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.014000 audit: BPF prog-id=197 op=UNLOAD Jan 13 23:41:33.014000 audit[4806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.014000 audit: BPF prog-id=198 op=LOAD Jan 13 23:41:33.014000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3b43f88 a2=94 a3=2 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.014000 audit: BPF prog-id=198 op=UNLOAD Jan 13 
23:41:33.014000 audit[4806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.014000 audit: BPF prog-id=199 op=LOAD Jan 13 23:41:33.014000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3b43e08 a2=40 a3=ffffe3b43e38 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.016000 audit: BPF prog-id=199 op=UNLOAD Jan 13 23:41:33.016000 audit[4806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffe3b43e38 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.016000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.016000 audit: BPF prog-id=200 op=LOAD Jan 13 23:41:33.016000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3b43f58 a2=94 a3=b7 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.016000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.016000 audit: BPF prog-id=200 op=UNLOAD Jan 13 23:41:33.016000 audit[4806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.016000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.019000 audit: BPF prog-id=201 op=LOAD Jan 13 23:41:33.019000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3b43608 a2=94 a3=2 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 
23:41:33.019000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.020000 audit: BPF prog-id=201 op=UNLOAD Jan 13 23:41:33.020000 audit[4806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.020000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.020000 audit: BPF prog-id=202 op=LOAD Jan 13 23:41:33.020000 audit[4806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3b43798 a2=94 a3=30 items=0 ppid=4650 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.020000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:41:33.040000 audit: BPF prog-id=203 op=LOAD Jan 13 23:41:33.040000 audit[4811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe6c74d58 a2=98 a3=ffffe6c74d48 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.040000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.042000 audit: BPF prog-id=203 op=UNLOAD Jan 13 23:41:33.042000 audit[4811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe6c74d28 a3=0 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.042000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.047000 audit: BPF prog-id=204 op=LOAD Jan 13 23:41:33.047000 audit[4811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe6c749e8 a2=74 a3=95 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.047000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.047000 audit: BPF prog-id=204 op=UNLOAD Jan 13 23:41:33.047000 audit[4811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 
a3=95 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.047000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.048000 audit: BPF prog-id=205 op=LOAD Jan 13 23:41:33.048000 audit[4811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe6c74a48 a2=94 a3=2 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.048000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.048000 audit: BPF prog-id=205 op=UNLOAD Jan 13 23:41:33.048000 audit[4811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.048000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.128427 containerd[1894]: 2026-01-13 23:41:31.495 [INFO][4630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 23:41:33.128427 containerd[1894]: 2026-01-13 23:41:32.548 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0 whisker-d477b98f6- calico-system 999f0f43-0933-4443-a84f-03be4dcf7cf6 972 0 2026-01-13 23:41:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d477b98f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-127 whisker-d477b98f6-wrppl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif66d11f0612 [] [] }} ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Namespace="calico-system" Pod="whisker-d477b98f6-wrppl" WorkloadEndpoint="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-" Jan 13 23:41:33.128427 containerd[1894]: 2026-01-13 23:41:32.548 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Namespace="calico-system" Pod="whisker-d477b98f6-wrppl" WorkloadEndpoint="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" Jan 13 23:41:33.128427 containerd[1894]: 2026-01-13 23:41:32.699 [INFO][4768] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" HandleID="k8s-pod-network.5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Workload="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" Jan 13 23:41:33.129463 containerd[1894]: 2026-01-13 23:41:32.700 [INFO][4768] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" HandleID="k8s-pod-network.5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Workload="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ddd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-127", "pod":"whisker-d477b98f6-wrppl", "timestamp":"2026-01-13 23:41:32.69968128 +0000 UTC"}, Hostname:"ip-172-31-24-127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:41:33.129463 containerd[1894]: 2026-01-13 23:41:32.700 [INFO][4768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:41:33.129463 containerd[1894]: 2026-01-13 23:41:32.700 [INFO][4768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:41:33.129463 containerd[1894]: 2026-01-13 23:41:32.701 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-127' Jan 13 23:41:33.129463 containerd[1894]: 2026-01-13 23:41:32.734 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" host="ip-172-31-24-127" Jan 13 23:41:33.129463 containerd[1894]: 2026-01-13 23:41:32.756 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-127" Jan 13 23:41:33.129463 containerd[1894]: 2026-01-13 23:41:32.778 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:33.129463 containerd[1894]: 2026-01-13 23:41:32.797 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:33.129463 containerd[1894]: 2026-01-13 23:41:32.814 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:33.130855 containerd[1894]: 2026-01-13 23:41:32.814 [INFO][4768] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" host="ip-172-31-24-127" Jan 13 23:41:33.130855 containerd[1894]: 2026-01-13 23:41:32.823 [INFO][4768] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d Jan 13 23:41:33.130855 containerd[1894]: 2026-01-13 23:41:32.844 [INFO][4768] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" host="ip-172-31-24-127" Jan 13 23:41:33.130855 containerd[1894]: 2026-01-13 23:41:32.869 [INFO][4768] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.1/26] block=192.168.115.0/26 handle="k8s-pod-network.5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" host="ip-172-31-24-127" Jan 13 23:41:33.130855 containerd[1894]: 2026-01-13 23:41:32.869 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.1/26] handle="k8s-pod-network.5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" host="ip-172-31-24-127" Jan 13 23:41:33.130855 containerd[1894]: 2026-01-13 23:41:32.869 [INFO][4768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
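The IPAM lines above show the node's affine block 192.168.115.0/26 being confirmed and the first address, 192.168.115.1, being claimed for the whisker pod under the host-wide IPAM lock; the pod itself is routed as a /32 while the /26 is the per-node block. A minimal sketch of the block arithmetic, using only the Python standard library:

    import ipaddress

    block = ipaddress.ip_network("192.168.115.0/26")
    print(block.num_addresses)                              # 64 addresses: 192.168.115.0 .. 192.168.115.63
    print(ipaddress.ip_address("192.168.115.1") in block)   # True: the claimed address sits inside the block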
Jan 13 23:41:33.130855 containerd[1894]: 2026-01-13 23:41:32.870 [INFO][4768] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.1/26] IPv6=[] ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" HandleID="k8s-pod-network.5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Workload="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" Jan 13 23:41:33.131542 containerd[1894]: 2026-01-13 23:41:32.888 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Namespace="calico-system" Pod="whisker-d477b98f6-wrppl" WorkloadEndpoint="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0", GenerateName:"whisker-d477b98f6-", Namespace:"calico-system", SelfLink:"", UID:"999f0f43-0933-4443-a84f-03be4dcf7cf6", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d477b98f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"", Pod:"whisker-d477b98f6-wrppl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif66d11f0612", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:33.131542 containerd[1894]: 2026-01-13 23:41:32.889 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.1/32] ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Namespace="calico-system" Pod="whisker-d477b98f6-wrppl" WorkloadEndpoint="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" Jan 13 23:41:33.132086 containerd[1894]: 2026-01-13 23:41:32.890 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif66d11f0612 ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Namespace="calico-system" Pod="whisker-d477b98f6-wrppl" WorkloadEndpoint="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" Jan 13 23:41:33.132086 containerd[1894]: 2026-01-13 23:41:33.004 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Namespace="calico-system" Pod="whisker-d477b98f6-wrppl" WorkloadEndpoint="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" Jan 13 23:41:33.132206 containerd[1894]: 2026-01-13 23:41:33.008 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Namespace="calico-system" Pod="whisker-d477b98f6-wrppl" 
WorkloadEndpoint="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0", GenerateName:"whisker-d477b98f6-", Namespace:"calico-system", SelfLink:"", UID:"999f0f43-0933-4443-a84f-03be4dcf7cf6", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d477b98f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d", Pod:"whisker-d477b98f6-wrppl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif66d11f0612", MAC:"62:3b:d7:43:05:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:33.132342 containerd[1894]: 2026-01-13 23:41:33.118 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" Namespace="calico-system" Pod="whisker-d477b98f6-wrppl" WorkloadEndpoint="ip--172--31--24--127-k8s-whisker--d477b98f6--wrppl-eth0" Jan 13 23:41:33.231178 containerd[1894]: time="2026-01-13T23:41:33.231108565Z" level=info msg="connecting to shim 5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d" address="unix:///run/containerd/s/e8b30763cd53e0ad21680c262352b8a5f1ecc947393d44cd6dd9eab1ad4cc017" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:33.301309 systemd[1]: Started cri-containerd-5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d.scope - libcontainer container 5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d. 
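The two endpoint dumps above differ only in that the second one carries the sandbox ContainerID and the freshly assigned MAC (62:3b:d7:43:05:18) for the host-side veth calif66d11f0612. When correlating cali* interfaces on the host with pods from logs like these, a small parse of the dump is enough; a sketch, assuming the field layout shown above:

    import re

    # Excerpt from the "Added Mac, interface name, and active container ID" dump above.
    line = 'InterfaceName:"calif66d11f0612", MAC:"62:3b:d7:43:05:18"'
    m = re.search(r'InterfaceName:"(?P<ifname>[^"]+)", MAC:"(?P<mac>[^"]*)"', line)
    if m:
        print(m.group("ifname"), m.group("mac"))  # calif66d11f0612 62:3b:d7:43:05:18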
Jan 13 23:41:33.331000 audit: BPF prog-id=206 op=LOAD Jan 13 23:41:33.332000 audit: BPF prog-id=207 op=LOAD Jan 13 23:41:33.332000 audit[4839]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4824 pid=4839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353034336566636539663736646662383732666564636461333934 Jan 13 23:41:33.333000 audit: BPF prog-id=207 op=UNLOAD Jan 13 23:41:33.333000 audit[4839]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4824 pid=4839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353034336566636539663736646662383732666564636461333934 Jan 13 23:41:33.333000 audit: BPF prog-id=208 op=LOAD Jan 13 23:41:33.333000 audit[4839]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4824 pid=4839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353034336566636539663736646662383732666564636461333934 Jan 13 23:41:33.334000 audit: BPF prog-id=209 op=LOAD Jan 13 23:41:33.334000 audit[4839]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4824 pid=4839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353034336566636539663736646662383732666564636461333934 Jan 13 23:41:33.335000 audit: BPF prog-id=209 op=UNLOAD Jan 13 23:41:33.335000 audit[4839]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4824 pid=4839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353034336566636539663736646662383732666564636461333934 Jan 13 23:41:33.335000 audit: BPF prog-id=208 op=UNLOAD Jan 13 23:41:33.335000 audit[4839]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4824 pid=4839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353034336566636539663736646662383732666564636461333934 Jan 13 23:41:33.335000 audit: BPF prog-id=210 op=LOAD Jan 13 23:41:33.335000 audit[4839]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4824 pid=4839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353034336566636539663736646662383732666564636461333934 Jan 13 23:41:33.338000 audit: BPF prog-id=211 op=LOAD Jan 13 23:41:33.338000 audit[4811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe6c74a08 a2=40 a3=ffffe6c74a38 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.338000 audit: BPF prog-id=211 op=UNLOAD Jan 13 23:41:33.338000 audit[4811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe6c74a38 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.358000 audit: BPF prog-id=212 op=LOAD Jan 13 23:41:33.358000 audit[4811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe6c74a18 a2=94 a3=4 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.358000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.358000 audit: BPF prog-id=212 op=UNLOAD Jan 13 23:41:33.358000 audit[4811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.358000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.358000 audit: BPF prog-id=213 op=LOAD Jan 13 23:41:33.358000 audit[4811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe6c74858 a2=94 a3=5 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.358000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.359000 audit: BPF prog-id=213 op=UNLOAD Jan 13 23:41:33.359000 audit[4811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.359000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.359000 audit: BPF prog-id=214 op=LOAD Jan 13 23:41:33.359000 audit[4811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe6c74a88 a2=94 a3=6 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.359000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.359000 audit: BPF prog-id=214 op=UNLOAD Jan 13 23:41:33.359000 audit[4811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.359000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.359000 audit: BPF prog-id=215 op=LOAD Jan 13 23:41:33.359000 audit[4811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe6c74258 a2=94 a3=83 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.359000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.360000 audit: BPF prog-id=216 op=LOAD Jan 13 23:41:33.360000 audit[4811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe6c74018 a2=94 a3=2 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.360000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.360000 audit: BPF prog-id=216 op=UNLOAD Jan 13 23:41:33.360000 audit[4811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.360000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.361000 audit: BPF prog-id=215 op=UNLOAD Jan 13 23:41:33.361000 audit[4811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=9c7b620 a3=9c6eb00 items=0 ppid=4650 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.361000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:41:33.371000 audit: BPF prog-id=202 op=UNLOAD Jan 13 23:41:33.371000 audit[4650]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000c7c2c0 a2=0 a3=0 items=0 ppid=4641 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.371000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 13 23:41:33.404372 containerd[1894]: time="2026-01-13T23:41:33.403989311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6488775c94-xpm9n,Uid:c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:33.413795 containerd[1894]: time="2026-01-13T23:41:33.413497727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dpd56,Uid:53d99ef2-7e93-4ffd-bfa2-64159cfed963,Namespace:kube-system,Attempt:0,}" Jan 13 23:41:33.616841 containerd[1894]: time="2026-01-13T23:41:33.616628191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d477b98f6-wrppl,Uid:999f0f43-0933-4443-a84f-03be4dcf7cf6,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c5043efce9f76dfb872fedcda3941b4ebde10f68c378bdd64508443b66d0e6d\"" Jan 13 23:41:33.633300 containerd[1894]: time="2026-01-13T23:41:33.632614132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:41:33.796000 audit[4915]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:33.796000 audit[4915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc402bdc0 a2=0 a3=ffff81492fa8 items=0 ppid=4650 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 
23:41:33.796000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:33.826000 audit[4925]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4925 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:33.826000 audit[4925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd47885a0 a2=0 a3=ffff80f11fa8 items=0 ppid=4650 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.826000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:33.854000 audit[4927]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:33.854000 audit[4927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd1f5c550 a2=0 a3=ffff8abc4fa8 items=0 ppid=4650 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.854000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:33.879000 audit[4930]: NETFILTER_CFG table=filter:124 family=2 entries=39 op=nft_register_chain pid=4930 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:33.879000 audit[4930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffec102180 a2=0 a3=ffff812e0fa8 items=0 ppid=4650 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:33.879000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:33.958824 containerd[1894]: time="2026-01-13T23:41:33.958660428Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:33.970661 containerd[1894]: time="2026-01-13T23:41:33.969050000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:41:33.970661 containerd[1894]: time="2026-01-13T23:41:33.969096163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:33.970879 kubelet[3516]: E0113 23:41:33.970057 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:41:33.970879 kubelet[3516]: E0113 23:41:33.970142 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:41:33.970879 kubelet[3516]: E0113 23:41:33.970276 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-d477b98f6-wrppl_calico-system(999f0f43-0933-4443-a84f-03be4dcf7cf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:33.974520 containerd[1894]: time="2026-01-13T23:41:33.974473350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:41:34.013976 (udev-worker)[4812]: Network interface NamePolicy= disabled on kernel command line. Jan 13 23:41:34.020626 systemd-networkd[1617]: calia8ad2c80264: Link UP Jan 13 23:41:34.026146 systemd-networkd[1617]: calia8ad2c80264: Gained carrier Jan 13 23:41:34.002000 audit[4937]: NETFILTER_CFG table=filter:125 family=2 entries=59 op=nft_register_chain pid=4937 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:34.002000 audit[4937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=35860 a0=3 a1=ffffee181af0 a2=0 a3=ffff8169afa8 items=0 ppid=4650 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.002000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:34.073677 containerd[1894]: 2026-01-13 23:41:33.673 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0 calico-kube-controllers-6488775c94- calico-system c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a 898 0 2026-01-13 23:41:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6488775c94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-127 calico-kube-controllers-6488775c94-xpm9n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia8ad2c80264 [] [] }} ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Namespace="calico-system" Pod="calico-kube-controllers-6488775c94-xpm9n" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-" Jan 13 23:41:34.073677 containerd[1894]: 2026-01-13 23:41:33.673 [INFO][4864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Namespace="calico-system" Pod="calico-kube-controllers-6488775c94-xpm9n" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" Jan 13 23:41:34.073677 containerd[1894]: 2026-01-13 23:41:33.855 [INFO][4910] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" 
HandleID="k8s-pod-network.d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Workload="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" Jan 13 23:41:34.075151 containerd[1894]: 2026-01-13 23:41:33.857 [INFO][4910] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" HandleID="k8s-pod-network.d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Workload="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103c10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-127", "pod":"calico-kube-controllers-6488775c94-xpm9n", "timestamp":"2026-01-13 23:41:33.855733253 +0000 UTC"}, Hostname:"ip-172-31-24-127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:41:34.075151 containerd[1894]: 2026-01-13 23:41:33.857 [INFO][4910] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:41:34.075151 containerd[1894]: 2026-01-13 23:41:33.857 [INFO][4910] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:41:34.075151 containerd[1894]: 2026-01-13 23:41:33.857 [INFO][4910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-127' Jan 13 23:41:34.075151 containerd[1894]: 2026-01-13 23:41:33.886 [INFO][4910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" host="ip-172-31-24-127" Jan 13 23:41:34.075151 containerd[1894]: 2026-01-13 23:41:33.905 [INFO][4910] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-127" Jan 13 23:41:34.075151 containerd[1894]: 2026-01-13 23:41:33.936 [INFO][4910] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:34.075151 containerd[1894]: 2026-01-13 23:41:33.948 [INFO][4910] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:34.075151 containerd[1894]: 2026-01-13 23:41:33.956 [INFO][4910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:34.075629 containerd[1894]: 2026-01-13 23:41:33.957 [INFO][4910] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" host="ip-172-31-24-127" Jan 13 23:41:34.075629 containerd[1894]: 2026-01-13 23:41:33.962 [INFO][4910] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4 Jan 13 23:41:34.075629 containerd[1894]: 2026-01-13 23:41:33.980 [INFO][4910] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" host="ip-172-31-24-127" Jan 13 23:41:34.075629 containerd[1894]: 2026-01-13 23:41:34.004 [INFO][4910] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.2/26] block=192.168.115.0/26 handle="k8s-pod-network.d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" host="ip-172-31-24-127" Jan 13 23:41:34.075629 containerd[1894]: 2026-01-13 23:41:34.004 [INFO][4910] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.115.2/26] handle="k8s-pod-network.d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" host="ip-172-31-24-127" Jan 13 23:41:34.075629 containerd[1894]: 2026-01-13 23:41:34.004 [INFO][4910] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 13 23:41:34.075629 containerd[1894]: 2026-01-13 23:41:34.004 [INFO][4910] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.2/26] IPv6=[] ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" HandleID="k8s-pod-network.d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Workload="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" Jan 13 23:41:34.077461 containerd[1894]: 2026-01-13 23:41:34.010 [INFO][4864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Namespace="calico-system" Pod="calico-kube-controllers-6488775c94-xpm9n" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0", GenerateName:"calico-kube-controllers-6488775c94-", Namespace:"calico-system", SelfLink:"", UID:"c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6488775c94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"", Pod:"calico-kube-controllers-6488775c94-xpm9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia8ad2c80264", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:34.077613 containerd[1894]: 2026-01-13 23:41:34.010 [INFO][4864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.2/32] ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Namespace="calico-system" Pod="calico-kube-controllers-6488775c94-xpm9n" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" Jan 13 23:41:34.077613 containerd[1894]: 2026-01-13 23:41:34.010 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8ad2c80264 ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Namespace="calico-system" Pod="calico-kube-controllers-6488775c94-xpm9n" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" Jan 13 23:41:34.077613 containerd[1894]: 2026-01-13 23:41:34.030 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Namespace="calico-system" Pod="calico-kube-controllers-6488775c94-xpm9n" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" Jan 13 23:41:34.077777 containerd[1894]: 2026-01-13 23:41:34.038 [INFO][4864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Namespace="calico-system" Pod="calico-kube-controllers-6488775c94-xpm9n" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0", GenerateName:"calico-kube-controllers-6488775c94-", Namespace:"calico-system", SelfLink:"", UID:"c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6488775c94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4", Pod:"calico-kube-controllers-6488775c94-xpm9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia8ad2c80264", MAC:"e6:7f:67:64:35:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:34.079496 containerd[1894]: 2026-01-13 23:41:34.065 [INFO][4864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" Namespace="calico-system" Pod="calico-kube-controllers-6488775c94-xpm9n" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--kube--controllers--6488775c94--xpm9n-eth0" Jan 13 23:41:34.150395 containerd[1894]: time="2026-01-13T23:41:34.150122680Z" level=info msg="connecting to shim d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4" address="unix:///run/containerd/s/2c5c77390687a8dde0b77befa886db3f70704bd5cca7544fe4ac859b626314fe" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:34.154000 audit[4950]: NETFILTER_CFG table=filter:126 family=2 entries=36 op=nft_register_chain pid=4950 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:34.154000 audit[4950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffd8c2ac60 a2=0 a3=ffffb426efa8 items=0 ppid=4650 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.154000 audit: 
PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:34.177062 systemd-networkd[1617]: cali7c1d19f53b3: Link UP Jan 13 23:41:34.178461 systemd-networkd[1617]: cali7c1d19f53b3: Gained carrier Jan 13 23:41:34.233414 containerd[1894]: 2026-01-13 23:41:33.721 [INFO][4873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0 coredns-66bc5c9577- kube-system 53d99ef2-7e93-4ffd-bfa2-64159cfed963 899 0 2026-01-13 23:40:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-127 coredns-66bc5c9577-dpd56 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c1d19f53b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Namespace="kube-system" Pod="coredns-66bc5c9577-dpd56" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-" Jan 13 23:41:34.233414 containerd[1894]: 2026-01-13 23:41:33.722 [INFO][4873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Namespace="kube-system" Pod="coredns-66bc5c9577-dpd56" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" Jan 13 23:41:34.233414 containerd[1894]: 2026-01-13 23:41:33.866 [INFO][4919] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" HandleID="k8s-pod-network.8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Workload="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" Jan 13 23:41:34.233769 containerd[1894]: 2026-01-13 23:41:33.867 [INFO][4919] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" HandleID="k8s-pod-network.8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Workload="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400010e690), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-127", "pod":"coredns-66bc5c9577-dpd56", "timestamp":"2026-01-13 23:41:33.866642757 +0000 UTC"}, Hostname:"ip-172-31-24-127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:41:34.233769 containerd[1894]: 2026-01-13 23:41:33.867 [INFO][4919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:41:34.233769 containerd[1894]: 2026-01-13 23:41:34.004 [INFO][4919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:41:34.233769 containerd[1894]: 2026-01-13 23:41:34.005 [INFO][4919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-127' Jan 13 23:41:34.233769 containerd[1894]: 2026-01-13 23:41:34.045 [INFO][4919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" host="ip-172-31-24-127" Jan 13 23:41:34.233769 containerd[1894]: 2026-01-13 23:41:34.062 [INFO][4919] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-127" Jan 13 23:41:34.233769 containerd[1894]: 2026-01-13 23:41:34.095 [INFO][4919] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:34.233769 containerd[1894]: 2026-01-13 23:41:34.108 [INFO][4919] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:34.233769 containerd[1894]: 2026-01-13 23:41:34.117 [INFO][4919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:34.234511 containerd[1894]: 2026-01-13 23:41:34.118 [INFO][4919] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" host="ip-172-31-24-127" Jan 13 23:41:34.234511 containerd[1894]: 2026-01-13 23:41:34.127 [INFO][4919] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38 Jan 13 23:41:34.234511 containerd[1894]: 2026-01-13 23:41:34.140 [INFO][4919] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" host="ip-172-31-24-127" Jan 13 23:41:34.234511 containerd[1894]: 2026-01-13 23:41:34.161 [INFO][4919] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.3/26] block=192.168.115.0/26 handle="k8s-pod-network.8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" host="ip-172-31-24-127" Jan 13 23:41:34.234511 containerd[1894]: 2026-01-13 23:41:34.164 [INFO][4919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.3/26] handle="k8s-pod-network.8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" host="ip-172-31-24-127" Jan 13 23:41:34.234511 containerd[1894]: 2026-01-13 23:41:34.164 [INFO][4919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
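The NETFILTER_CFG records interleaved above are Felix (calico-node) programming iptables-nft: the parent pid 4650 of the iptables-nft-restore processes matches the calico-node -felix process seen earlier, their PROCTITLE decodes to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000", and syscall 211 is sendmsg() on AArch64 (the netlink batch being sent). The number after the table name appears to be the nftables generation counter, which would explain why it increments from 121 to 126 across these transactions. A sketch for pulling the fields out of such a record:

    # Parse the key=value fields of a NETFILTER_CFG audit record (example taken from the log above).
    rec = "table=filter:126 family=2 entries=36 op=nft_register_chain"
    fields = dict(kv.split("=", 1) for kv in rec.split())
    print(fields["table"], fields["family"], fields["entries"], fields["op"])
    # family=2 is AF_INET; entries is the count of objects registered in this batch.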
Jan 13 23:41:34.234511 containerd[1894]: 2026-01-13 23:41:34.164 [INFO][4919] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.3/26] IPv6=[] ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" HandleID="k8s-pod-network.8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Workload="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" Jan 13 23:41:34.234972 containerd[1894]: 2026-01-13 23:41:34.171 [INFO][4873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Namespace="kube-system" Pod="coredns-66bc5c9577-dpd56" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"53d99ef2-7e93-4ffd-bfa2-64159cfed963", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 40, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"", Pod:"coredns-66bc5c9577-dpd56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c1d19f53b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:34.234972 containerd[1894]: 2026-01-13 23:41:34.172 [INFO][4873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.3/32] ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Namespace="kube-system" Pod="coredns-66bc5c9577-dpd56" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" Jan 13 23:41:34.234972 containerd[1894]: 2026-01-13 23:41:34.172 [INFO][4873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c1d19f53b3 ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Namespace="kube-system" Pod="coredns-66bc5c9577-dpd56" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" Jan 
13 23:41:34.234972 containerd[1894]: 2026-01-13 23:41:34.179 [INFO][4873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Namespace="kube-system" Pod="coredns-66bc5c9577-dpd56" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" Jan 13 23:41:34.234972 containerd[1894]: 2026-01-13 23:41:34.180 [INFO][4873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Namespace="kube-system" Pod="coredns-66bc5c9577-dpd56" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"53d99ef2-7e93-4ffd-bfa2-64159cfed963", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 40, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38", Pod:"coredns-66bc5c9577-dpd56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c1d19f53b3", MAC:"de:c8:cf:5c:ab:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:34.234972 containerd[1894]: 2026-01-13 23:41:34.222 [INFO][4873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" Namespace="kube-system" Pod="coredns-66bc5c9577-dpd56" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--dpd56-eth0" Jan 13 23:41:34.255305 systemd[1]: Started cri-containerd-d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4.scope - libcontainer container d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4. 
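The long proctitle= values in the audit records (the iptables-nft-restore one at the top of this excerpt, and the runc ones that follow) are not corruption: auditd hex-encodes the process command line whenever it contains non-printable bytes, with argv entries separated by NULs. A small decoder, assuming nothing beyond the Python standard library:

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(a.decode(errors="replace") for a in raw.split(b"\x00") if a)

    print(decode_proctitle(
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C75736800"
        "2D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
    ))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000

The runc proctitles below decode the same way, to runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/... invocations, with the task path truncated in the record.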
Jan 13 23:41:34.283050 containerd[1894]: time="2026-01-13T23:41:34.282372265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:34.284340 containerd[1894]: time="2026-01-13T23:41:34.284181281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:41:34.284523 containerd[1894]: time="2026-01-13T23:41:34.284323024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:34.285196 kubelet[3516]: E0113 23:41:34.285134 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:41:34.287434 kubelet[3516]: E0113 23:41:34.287053 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:41:34.287702 kubelet[3516]: E0113 23:41:34.287635 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-d477b98f6-wrppl_calico-system(999f0f43-0933-4443-a84f-03be4dcf7cf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:34.288989 kubelet[3516]: E0113 23:41:34.288174 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:41:34.315194 containerd[1894]: time="2026-01-13T23:41:34.315100021Z" level=info msg="connecting to shim 8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38" address="unix:///run/containerd/s/6a09c8413ac68bd41d8090cc46019fb60ffa3c3083dbaeecf263f60becd26906" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:34.323000 audit: BPF prog-id=217 op=LOAD Jan 13 23:41:34.325000 audit: BPF prog-id=218 op=LOAD Jan 13 23:41:34.325000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.325000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343732313739316135356434643239343861316463343466376138 Jan 13 23:41:34.325000 audit: BPF prog-id=218 op=UNLOAD Jan 13 23:41:34.325000 audit[4973]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343732313739316135356434643239343861316463343466376138 Jan 13 23:41:34.329000 audit: BPF prog-id=219 op=LOAD Jan 13 23:41:34.329000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343732313739316135356434643239343861316463343466376138 Jan 13 23:41:34.329000 audit: BPF prog-id=220 op=LOAD Jan 13 23:41:34.329000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343732313739316135356434643239343861316463343466376138 Jan 13 23:41:34.329000 audit: BPF prog-id=220 op=UNLOAD Jan 13 23:41:34.329000 audit[4973]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343732313739316135356434643239343861316463343466376138 Jan 13 23:41:34.329000 audit: BPF prog-id=219 op=UNLOAD Jan 13 23:41:34.329000 audit[4973]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.329000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343732313739316135356434643239343861316463343466376138 Jan 13 23:41:34.329000 audit: BPF prog-id=221 op=LOAD Jan 13 23:41:34.329000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4960 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343732313739316135356434643239343861316463343466376138 Jan 13 23:41:34.372247 systemd-networkd[1617]: vxlan.calico: Gained IPv6LL Jan 13 23:41:34.412360 systemd[1]: Started cri-containerd-8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38.scope - libcontainer container 8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38. Jan 13 23:41:34.477000 audit: BPF prog-id=222 op=LOAD Jan 13 23:41:34.481983 kernel: kauditd_printk_skb: 253 callbacks suppressed Jan 13 23:41:34.482140 kernel: audit: type=1334 audit(1768347694.477:674): prog-id=222 op=LOAD Jan 13 23:41:34.480000 audit[5040]: NETFILTER_CFG table=filter:127 family=2 entries=46 op=nft_register_chain pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:34.496969 kernel: audit: type=1325 audit(1768347694.480:675): table=filter:127 family=2 entries=46 op=nft_register_chain pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:34.497320 kernel: audit: type=1300 audit(1768347694.480:675): arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffde6d2bb0 a2=0 a3=ffff7faecfa8 items=0 ppid=4650 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.480000 audit[5040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffde6d2bb0 a2=0 a3=ffff7faecfa8 items=0 ppid=4650 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.480000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:34.504674 kernel: audit: type=1327 audit(1768347694.480:675): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:34.485000 audit: BPF prog-id=223 op=LOAD Jan 13 23:41:34.508528 kernel: audit: type=1334 audit(1768347694.485:676): prog-id=223 op=LOAD Jan 13 23:41:34.485000 audit[5019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5006 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.509130 containerd[1894]: 
time="2026-01-13T23:41:34.506871788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6488775c94-xpm9n,Uid:c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d84721791a55d4d2948a1dc44f7a8fec7d2504435f1f36d795bdb6206bd18af4\"" Jan 13 23:41:34.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863353631616133623837613466633438343539383463623532303266 Jan 13 23:41:34.518033 kernel: audit: type=1300 audit(1768347694.485:676): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5006 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.524720 containerd[1894]: time="2026-01-13T23:41:34.519466154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:41:34.500000 audit: BPF prog-id=223 op=UNLOAD Jan 13 23:41:34.527621 kernel: audit: type=1327 audit(1768347694.485:676): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863353631616133623837613466633438343539383463623532303266 Jan 13 23:41:34.527746 kernel: audit: type=1334 audit(1768347694.500:677): prog-id=223 op=UNLOAD Jan 13 23:41:34.500000 audit[5019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.534683 kernel: audit: type=1300 audit(1768347694.500:677): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863353631616133623837613466633438343539383463623532303266 Jan 13 23:41:34.541545 kernel: audit: type=1327 audit(1768347694.500:677): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863353631616133623837613466633438343539383463623532303266 Jan 13 23:41:34.502000 audit: BPF prog-id=224 op=LOAD Jan 13 23:41:34.502000 audit[5019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5006 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.502000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863353631616133623837613466633438343539383463623532303266 Jan 13 23:41:34.527000 audit: BPF prog-id=225 op=LOAD Jan 13 23:41:34.527000 audit[5019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5006 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863353631616133623837613466633438343539383463623532303266 Jan 13 23:41:34.540000 audit: BPF prog-id=225 op=UNLOAD Jan 13 23:41:34.540000 audit[5019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863353631616133623837613466633438343539383463623532303266 Jan 13 23:41:34.540000 audit: BPF prog-id=224 op=UNLOAD Jan 13 23:41:34.540000 audit[5019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863353631616133623837613466633438343539383463623532303266 Jan 13 23:41:34.540000 audit: BPF prog-id=226 op=LOAD Jan 13 23:41:34.540000 audit[5019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5006 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863353631616133623837613466633438343539383463623532303266 Jan 13 23:41:34.632806 containerd[1894]: time="2026-01-13T23:41:34.632422985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dpd56,Uid:53d99ef2-7e93-4ffd-bfa2-64159cfed963,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38\"" Jan 13 23:41:34.650966 containerd[1894]: time="2026-01-13T23:41:34.650852796Z" level=info msg="CreateContainer within sandbox \"8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:41:34.729804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037293692.mount: Deactivated successfully. Jan 13 23:41:34.732566 containerd[1894]: time="2026-01-13T23:41:34.732498039Z" level=info msg="Container 719127f8d3f665e00785250102763dd9385398ad94a98a1050e16ee64eb952c1: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:41:34.747351 containerd[1894]: time="2026-01-13T23:41:34.747264278Z" level=info msg="CreateContainer within sandbox \"8c561aa3b87a4fc4845984cb5202f56341cb0dbf58d70cf07535e06ac0b34d38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"719127f8d3f665e00785250102763dd9385398ad94a98a1050e16ee64eb952c1\"" Jan 13 23:41:34.751983 containerd[1894]: time="2026-01-13T23:41:34.750359197Z" level=info msg="StartContainer for \"719127f8d3f665e00785250102763dd9385398ad94a98a1050e16ee64eb952c1\"" Jan 13 23:41:34.754702 containerd[1894]: time="2026-01-13T23:41:34.754643154Z" level=info msg="connecting to shim 719127f8d3f665e00785250102763dd9385398ad94a98a1050e16ee64eb952c1" address="unix:///run/containerd/s/6a09c8413ac68bd41d8090cc46019fb60ffa3c3083dbaeecf263f60becd26906" protocol=ttrpc version=3 Jan 13 23:41:34.794482 systemd[1]: Started cri-containerd-719127f8d3f665e00785250102763dd9385398ad94a98a1050e16ee64eb952c1.scope - libcontainer container 719127f8d3f665e00785250102763dd9385398ad94a98a1050e16ee64eb952c1. Jan 13 23:41:34.821000 audit: BPF prog-id=227 op=LOAD Jan 13 23:41:34.823000 audit: BPF prog-id=228 op=LOAD Jan 13 23:41:34.823000 audit[5052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5006 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393132376638643366363635653030373835323530313032373633 Jan 13 23:41:34.823000 audit: BPF prog-id=228 op=UNLOAD Jan 13 23:41:34.823000 audit[5052]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393132376638643366363635653030373835323530313032373633 Jan 13 23:41:34.824000 audit: BPF prog-id=229 op=LOAD Jan 13 23:41:34.824000 audit[5052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5006 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393132376638643366363635653030373835323530313032373633 Jan 13 23:41:34.825000 
audit: BPF prog-id=230 op=LOAD Jan 13 23:41:34.825000 audit[5052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5006 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393132376638643366363635653030373835323530313032373633 Jan 13 23:41:34.825000 audit: BPF prog-id=230 op=UNLOAD Jan 13 23:41:34.825000 audit[5052]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393132376638643366363635653030373835323530313032373633 Jan 13 23:41:34.825000 audit: BPF prog-id=229 op=UNLOAD Jan 13 23:41:34.825000 audit[5052]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393132376638643366363635653030373835323530313032373633 Jan 13 23:41:34.825000 audit: BPF prog-id=231 op=LOAD Jan 13 23:41:34.825000 audit[5052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5006 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:34.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393132376638643366363635653030373835323530313032373633 Jan 13 23:41:34.861504 containerd[1894]: time="2026-01-13T23:41:34.861347284Z" level=info msg="StartContainer for \"719127f8d3f665e00785250102763dd9385398ad94a98a1050e16ee64eb952c1\" returns successfully" Jan 13 23:41:34.948220 systemd-networkd[1617]: calif66d11f0612: Gained IPv6LL Jan 13 23:41:34.962315 containerd[1894]: time="2026-01-13T23:41:34.962248806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:34.963659 containerd[1894]: time="2026-01-13T23:41:34.963579960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:34.963940 containerd[1894]: time="2026-01-13T23:41:34.963743121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:41:34.965033 kubelet[3516]: E0113 23:41:34.964973 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:41:34.965234 kubelet[3516]: E0113 23:41:34.965034 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:41:34.965234 kubelet[3516]: E0113 23:41:34.965140 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6488775c94-xpm9n_calico-system(c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:34.965234 kubelet[3516]: E0113 23:41:34.965190 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:41:34.977461 kubelet[3516]: E0113 23:41:34.977246 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:41:35.038292 kubelet[3516]: I0113 23:41:35.037939 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dpd56" podStartSLOduration=58.037805542 podStartE2EDuration="58.037805542s" podCreationTimestamp="2026-01-13 23:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:41:35.036224856 +0000 UTC m=+62.912251228" watchObservedRunningTime="2026-01-13 23:41:35.037805542 +0000 UTC m=+62.913831902" Jan 13 23:41:35.332522 systemd-networkd[1617]: cali7c1d19f53b3: Gained IPv6LL Jan 13 23:41:35.398502 
containerd[1894]: time="2026-01-13T23:41:35.398433705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p84n5,Uid:e305c05b-4fdf-40a3-854a-8a106f493072,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:35.555000 audit[5090]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:35.555000 audit[5090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe7cf5860 a2=0 a3=1 items=0 ppid=3663 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.555000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:35.560000 audit[5090]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:35.560000 audit[5090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe7cf5860 a2=0 a3=1 items=0 ppid=3663 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.560000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:35.639290 systemd-networkd[1617]: cali87c33c051ba: Link UP Jan 13 23:41:35.641209 systemd-networkd[1617]: cali87c33c051ba: Gained carrier Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.481 [INFO][5080] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0 csi-node-driver- calico-system e305c05b-4fdf-40a3-854a-8a106f493072 783 0 2026-01-13 23:41:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-127 csi-node-driver-p84n5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali87c33c051ba [] [] }} ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Namespace="calico-system" Pod="csi-node-driver-p84n5" WorkloadEndpoint="ip--172--31--24--127-k8s-csi--node--driver--p84n5-" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.482 [INFO][5080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Namespace="calico-system" Pod="csi-node-driver-p84n5" WorkloadEndpoint="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.554 [INFO][5093] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" HandleID="k8s-pod-network.ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Workload="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.554 [INFO][5093] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" HandleID="k8s-pod-network.ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Workload="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-127", "pod":"csi-node-driver-p84n5", "timestamp":"2026-01-13 23:41:35.55417968 +0000 UTC"}, Hostname:"ip-172-31-24-127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.554 [INFO][5093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.554 [INFO][5093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.554 [INFO][5093] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-127' Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.576 [INFO][5093] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" host="ip-172-31-24-127" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.587 [INFO][5093] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-127" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.596 [INFO][5093] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.601 [INFO][5093] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.606 [INFO][5093] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.606 [INFO][5093] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" host="ip-172-31-24-127" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.609 [INFO][5093] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461 Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.617 [INFO][5093] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" host="ip-172-31-24-127" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.629 [INFO][5093] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.4/26] block=192.168.115.0/26 handle="k8s-pod-network.ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" host="ip-172-31-24-127" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.629 [INFO][5093] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.4/26] handle="k8s-pod-network.ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" host="ip-172-31-24-127" Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.629 [INFO][5093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:41:35.674302 containerd[1894]: 2026-01-13 23:41:35.629 [INFO][5093] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.4/26] IPv6=[] ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" HandleID="k8s-pod-network.ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Workload="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" Jan 13 23:41:35.675655 containerd[1894]: 2026-01-13 23:41:35.634 [INFO][5080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Namespace="calico-system" Pod="csi-node-driver-p84n5" WorkloadEndpoint="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e305c05b-4fdf-40a3-854a-8a106f493072", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"", Pod:"csi-node-driver-p84n5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87c33c051ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:35.675655 containerd[1894]: 2026-01-13 23:41:35.634 [INFO][5080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.4/32] ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Namespace="calico-system" Pod="csi-node-driver-p84n5" WorkloadEndpoint="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" Jan 13 23:41:35.675655 containerd[1894]: 2026-01-13 23:41:35.634 [INFO][5080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87c33c051ba ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Namespace="calico-system" Pod="csi-node-driver-p84n5" WorkloadEndpoint="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" Jan 13 23:41:35.675655 containerd[1894]: 2026-01-13 23:41:35.642 [INFO][5080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Namespace="calico-system" Pod="csi-node-driver-p84n5" WorkloadEndpoint="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" Jan 13 23:41:35.675655 containerd[1894]: 2026-01-13 23:41:35.642 [INFO][5080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" 
Namespace="calico-system" Pod="csi-node-driver-p84n5" WorkloadEndpoint="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e305c05b-4fdf-40a3-854a-8a106f493072", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461", Pod:"csi-node-driver-p84n5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87c33c051ba", MAC:"ce:cd:a5:13:48:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:35.675655 containerd[1894]: 2026-01-13 23:41:35.666 [INFO][5080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" Namespace="calico-system" Pod="csi-node-driver-p84n5" WorkloadEndpoint="ip--172--31--24--127-k8s-csi--node--driver--p84n5-eth0" Jan 13 23:41:35.711158 containerd[1894]: time="2026-01-13T23:41:35.710744380Z" level=info msg="connecting to shim ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461" address="unix:///run/containerd/s/62184f2d4348e9fdcd19f40efcb3653d9416c09fdebfb3b52cea0c77a40e488c" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:35.749000 audit[5122]: NETFILTER_CFG table=filter:130 family=2 entries=44 op=nft_register_chain pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:35.749000 audit[5122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21952 a0=3 a1=ffffd44cdc30 a2=0 a3=ffff8c4bcfa8 items=0 ppid=4650 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.749000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:35.781488 systemd[1]: Started cri-containerd-ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461.scope - libcontainer container ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461. 
Jan 13 23:41:35.848000 audit: BPF prog-id=232 op=LOAD Jan 13 23:41:35.850000 audit: BPF prog-id=233 op=LOAD Jan 13 23:41:35.850000 audit[5131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5118 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565313661656465613234373034666366653030373830366434383036 Jan 13 23:41:35.851000 audit: BPF prog-id=233 op=UNLOAD Jan 13 23:41:35.851000 audit[5131]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5118 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565313661656465613234373034666366653030373830366434383036 Jan 13 23:41:35.852000 audit: BPF prog-id=234 op=LOAD Jan 13 23:41:35.852000 audit[5131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5118 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565313661656465613234373034666366653030373830366434383036 Jan 13 23:41:35.853000 audit: BPF prog-id=235 op=LOAD Jan 13 23:41:35.853000 audit[5131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5118 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565313661656465613234373034666366653030373830366434383036 Jan 13 23:41:35.853000 audit: BPF prog-id=235 op=UNLOAD Jan 13 23:41:35.853000 audit[5131]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5118 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565313661656465613234373034666366653030373830366434383036 Jan 13 23:41:35.853000 audit: BPF prog-id=234 op=UNLOAD Jan 13 23:41:35.853000 audit[5131]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5118 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565313661656465613234373034666366653030373830366434383036 Jan 13 23:41:35.853000 audit: BPF prog-id=236 op=LOAD Jan 13 23:41:35.853000 audit[5131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5118 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:35.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565313661656465613234373034666366653030373830366434383036 Jan 13 23:41:35.973227 systemd-networkd[1617]: calia8ad2c80264: Gained IPv6LL Jan 13 23:41:35.990441 kubelet[3516]: E0113 23:41:35.990320 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:41:36.020934 containerd[1894]: time="2026-01-13T23:41:36.019537730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p84n5,Uid:e305c05b-4fdf-40a3-854a-8a106f493072,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee16aedea24704fcfe007806d48064f7c709d4d05c63f6373147a78cef9e1461\"" Jan 13 23:41:36.027491 containerd[1894]: time="2026-01-13T23:41:36.027384683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:41:36.293647 containerd[1894]: time="2026-01-13T23:41:36.293504014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:36.295046 containerd[1894]: time="2026-01-13T23:41:36.294866768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:41:36.295046 containerd[1894]: time="2026-01-13T23:41:36.294955372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:36.295348 kubelet[3516]: E0113 23:41:36.295201 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:41:36.295348 kubelet[3516]: E0113 23:41:36.295257 3516 kuberuntime_image.go:43] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:41:36.295472 kubelet[3516]: E0113 23:41:36.295365 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:36.297363 containerd[1894]: time="2026-01-13T23:41:36.297263909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:41:36.401112 containerd[1894]: time="2026-01-13T23:41:36.400449945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7569cdf946-hmfxf,Uid:0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:41:36.405382 containerd[1894]: time="2026-01-13T23:41:36.404835557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7569cdf946-2r7qk,Uid:bc4e2f94-7e3c-446b-9bce-55e8c6abc38d,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:41:36.406298 containerd[1894]: time="2026-01-13T23:41:36.405831143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xqbbk,Uid:497e3e66-1726-45f4-a990-23061cc5868e,Namespace:calico-system,Attempt:0,}" Jan 13 23:41:36.568488 containerd[1894]: time="2026-01-13T23:41:36.568233436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:36.573122 containerd[1894]: time="2026-01-13T23:41:36.572990370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:41:36.573540 containerd[1894]: time="2026-01-13T23:41:36.573174302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:36.575354 kubelet[3516]: E0113 23:41:36.573935 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:41:36.575354 kubelet[3516]: E0113 23:41:36.574060 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:41:36.577230 kubelet[3516]: E0113 23:41:36.577017 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:36.577230 kubelet[3516]: E0113 23:41:36.577121 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:36.641000 audit[5197]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:36.641000 audit[5197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffca244470 a2=0 a3=1 items=0 ppid=3663 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:36.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:36.654000 audit[5197]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:36.654000 audit[5197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffca244470 a2=0 a3=1 items=0 ppid=3663 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:36.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:36.870386 systemd-networkd[1617]: cali53170591cfd: Link UP Jan 13 23:41:36.873699 systemd-networkd[1617]: cali53170591cfd: Gained carrier Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.571 [INFO][5168] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0 calico-apiserver-7569cdf946- calico-apiserver bc4e2f94-7e3c-446b-9bce-55e8c6abc38d 894 0 2026-01-13 23:40:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7569cdf946 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-127 calico-apiserver-7569cdf946-2r7qk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali53170591cfd [] [] }} ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-2r7qk" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.572 [INFO][5168] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-2r7qk" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.731 [INFO][5196] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" HandleID="k8s-pod-network.74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Workload="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.732 [INFO][5196] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" HandleID="k8s-pod-network.74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Workload="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000301a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-127", "pod":"calico-apiserver-7569cdf946-2r7qk", "timestamp":"2026-01-13 23:41:36.731649529 +0000 UTC"}, Hostname:"ip-172-31-24-127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.732 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.732 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.732 [INFO][5196] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-127' Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.774 [INFO][5196] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" host="ip-172-31-24-127" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.789 [INFO][5196] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-127" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.802 [INFO][5196] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.806 [INFO][5196] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.814 [INFO][5196] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.814 [INFO][5196] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" host="ip-172-31-24-127" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.821 [INFO][5196] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006 Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.833 [INFO][5196] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 
handle="k8s-pod-network.74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" host="ip-172-31-24-127" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.844 [INFO][5196] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.5/26] block=192.168.115.0/26 handle="k8s-pod-network.74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" host="ip-172-31-24-127" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.845 [INFO][5196] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.5/26] handle="k8s-pod-network.74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" host="ip-172-31-24-127" Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.845 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 13 23:41:36.930831 containerd[1894]: 2026-01-13 23:41:36.845 [INFO][5196] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.5/26] IPv6=[] ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" HandleID="k8s-pod-network.74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Workload="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" Jan 13 23:41:36.934375 containerd[1894]: 2026-01-13 23:41:36.853 [INFO][5168] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-2r7qk" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0", GenerateName:"calico-apiserver-7569cdf946-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc4e2f94-7e3c-446b-9bce-55e8c6abc38d", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 40, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7569cdf946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"", Pod:"calico-apiserver-7569cdf946-2r7qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53170591cfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:36.934375 containerd[1894]: 2026-01-13 23:41:36.853 [INFO][5168] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.5/32] ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-2r7qk" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" Jan 13 23:41:36.934375 containerd[1894]: 2026-01-13 23:41:36.854 [INFO][5168] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to cali53170591cfd ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-2r7qk" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" Jan 13 23:41:36.934375 containerd[1894]: 2026-01-13 23:41:36.878 [INFO][5168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-2r7qk" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" Jan 13 23:41:36.934375 containerd[1894]: 2026-01-13 23:41:36.881 [INFO][5168] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-2r7qk" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0", GenerateName:"calico-apiserver-7569cdf946-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc4e2f94-7e3c-446b-9bce-55e8c6abc38d", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 40, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7569cdf946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006", Pod:"calico-apiserver-7569cdf946-2r7qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53170591cfd", MAC:"5e:79:3d:8d:86:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:36.934375 containerd[1894]: 2026-01-13 23:41:36.924 [INFO][5168] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-2r7qk" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--2r7qk-eth0" Jan 13 23:41:37.006514 kubelet[3516]: E0113 23:41:37.006319 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:37.015683 containerd[1894]: time="2026-01-13T23:41:37.014694763Z" level=info msg="connecting to shim 74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006" address="unix:///run/containerd/s/41517217e355df9a2e849d183bb9d0628e1c7825796cc5773938215a154d2254" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:37.109251 systemd-networkd[1617]: cali602e8f38ba9: Link UP Jan 13 23:41:37.109735 systemd-networkd[1617]: cali602e8f38ba9: Gained carrier Jan 13 23:41:37.146000 audit[5255]: NETFILTER_CFG table=filter:133 family=2 entries=62 op=nft_register_chain pid=5255 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:37.146000 audit[5255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31772 a0=3 a1=ffffebc0af10 a2=0 a3=ffffa077bfa8 items=0 ppid=4650 pid=5255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.146000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.680 [INFO][5163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0 goldmane-7c778bb748- calico-system 497e3e66-1726-45f4-a990-23061cc5868e 901 0 2026-01-13 23:41:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-127 goldmane-7c778bb748-xqbbk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali602e8f38ba9 [] [] }} ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Namespace="calico-system" Pod="goldmane-7c778bb748-xqbbk" WorkloadEndpoint="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.681 [INFO][5163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Namespace="calico-system" Pod="goldmane-7c778bb748-xqbbk" WorkloadEndpoint="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.766 [INFO][5207] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" HandleID="k8s-pod-network.9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Workload="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.768 [INFO][5207] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" 
HandleID="k8s-pod-network.9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Workload="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103950), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-127", "pod":"goldmane-7c778bb748-xqbbk", "timestamp":"2026-01-13 23:41:36.766424092 +0000 UTC"}, Hostname:"ip-172-31-24-127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.769 [INFO][5207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.846 [INFO][5207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.846 [INFO][5207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-127' Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.895 [INFO][5207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" host="ip-172-31-24-127" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.927 [INFO][5207] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-127" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.955 [INFO][5207] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.974 [INFO][5207] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.986 [INFO][5207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.986 [INFO][5207] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" host="ip-172-31-24-127" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:36.996 [INFO][5207] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4 Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:37.018 [INFO][5207] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" host="ip-172-31-24-127" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:37.061 [INFO][5207] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.6/26] block=192.168.115.0/26 handle="k8s-pod-network.9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" host="ip-172-31-24-127" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:37.061 [INFO][5207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.6/26] handle="k8s-pod-network.9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" host="ip-172-31-24-127" Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:37.064 [INFO][5207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:41:37.177817 containerd[1894]: 2026-01-13 23:41:37.064 [INFO][5207] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.6/26] IPv6=[] ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" HandleID="k8s-pod-network.9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Workload="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" Jan 13 23:41:37.186506 containerd[1894]: 2026-01-13 23:41:37.084 [INFO][5163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Namespace="calico-system" Pod="goldmane-7c778bb748-xqbbk" WorkloadEndpoint="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"497e3e66-1726-45f4-a990-23061cc5868e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 41, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"", Pod:"goldmane-7c778bb748-xqbbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali602e8f38ba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:37.186506 containerd[1894]: 2026-01-13 23:41:37.084 [INFO][5163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.6/32] ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Namespace="calico-system" Pod="goldmane-7c778bb748-xqbbk" WorkloadEndpoint="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" Jan 13 23:41:37.186506 containerd[1894]: 2026-01-13 23:41:37.084 [INFO][5163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali602e8f38ba9 ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Namespace="calico-system" Pod="goldmane-7c778bb748-xqbbk" WorkloadEndpoint="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" Jan 13 23:41:37.186506 containerd[1894]: 2026-01-13 23:41:37.118 [INFO][5163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Namespace="calico-system" Pod="goldmane-7c778bb748-xqbbk" WorkloadEndpoint="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" Jan 13 23:41:37.186506 containerd[1894]: 2026-01-13 23:41:37.120 [INFO][5163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Namespace="calico-system" Pod="goldmane-7c778bb748-xqbbk" 
WorkloadEndpoint="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"497e3e66-1726-45f4-a990-23061cc5868e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 41, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4", Pod:"goldmane-7c778bb748-xqbbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali602e8f38ba9", MAC:"4a:1d:8c:4b:b7:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:37.186506 containerd[1894]: 2026-01-13 23:41:37.160 [INFO][5163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" Namespace="calico-system" Pod="goldmane-7c778bb748-xqbbk" WorkloadEndpoint="ip--172--31--24--127-k8s-goldmane--7c778bb748--xqbbk-eth0" Jan 13 23:41:37.178264 systemd[1]: Started cri-containerd-74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006.scope - libcontainer container 74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006. Jan 13 23:41:37.272002 containerd[1894]: time="2026-01-13T23:41:37.271862720Z" level=info msg="connecting to shim 9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4" address="unix:///run/containerd/s/06a9935f87c05a5dd2bb5ea6626c9e1c8b1d030b44326809c3403d9ab203dfd8" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:37.333428 systemd-networkd[1617]: cali3a00579a664: Link UP Jan 13 23:41:37.333844 systemd-networkd[1617]: cali3a00579a664: Gained carrier Jan 13 23:41:37.345338 systemd[1]: Started cri-containerd-9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4.scope - libcontainer container 9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4. 
Jan 13 23:41:37.407306 containerd[1894]: time="2026-01-13T23:41:37.405427058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mslr6,Uid:b87a7357-7c47-49ad-871f-3e101a102b84,Namespace:kube-system,Attempt:0,}" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:36.666 [INFO][5159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0 calico-apiserver-7569cdf946- calico-apiserver 0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb 900 0 2026-01-13 23:40:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7569cdf946 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-127 calico-apiserver-7569cdf946-hmfxf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3a00579a664 [] [] }} ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-hmfxf" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:36.668 [INFO][5159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-hmfxf" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:36.798 [INFO][5205] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" HandleID="k8s-pod-network.076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Workload="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:36.798 [INFO][5205] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" HandleID="k8s-pod-network.076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Workload="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003723d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-127", "pod":"calico-apiserver-7569cdf946-hmfxf", "timestamp":"2026-01-13 23:41:36.798692006 +0000 UTC"}, Hostname:"ip-172-31-24-127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:36.799 [INFO][5205] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.064 [INFO][5205] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.064 [INFO][5205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-127' Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.134 [INFO][5205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" host="ip-172-31-24-127" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.183 [INFO][5205] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-127" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.214 [INFO][5205] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.221 [INFO][5205] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.236 [INFO][5205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.238 [INFO][5205] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" host="ip-172-31-24-127" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.245 [INFO][5205] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4 Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.270 [INFO][5205] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" host="ip-172-31-24-127" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.305 [INFO][5205] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.7/26] block=192.168.115.0/26 handle="k8s-pod-network.076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" host="ip-172-31-24-127" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.307 [INFO][5205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.7/26] handle="k8s-pod-network.076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" host="ip-172-31-24-127" Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.307 [INFO][5205] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:41:37.410934 containerd[1894]: 2026-01-13 23:41:37.307 [INFO][5205] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.7/26] IPv6=[] ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" HandleID="k8s-pod-network.076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Workload="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" Jan 13 23:41:37.414220 containerd[1894]: 2026-01-13 23:41:37.321 [INFO][5159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-hmfxf" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0", GenerateName:"calico-apiserver-7569cdf946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 40, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7569cdf946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"", Pod:"calico-apiserver-7569cdf946-hmfxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a00579a664", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:37.414220 containerd[1894]: 2026-01-13 23:41:37.321 [INFO][5159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.7/32] ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-hmfxf" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" Jan 13 23:41:37.414220 containerd[1894]: 2026-01-13 23:41:37.321 [INFO][5159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a00579a664 ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-hmfxf" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" Jan 13 23:41:37.414220 containerd[1894]: 2026-01-13 23:41:37.333 [INFO][5159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-hmfxf" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" Jan 13 23:41:37.414220 containerd[1894]: 2026-01-13 23:41:37.335 [INFO][5159] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-hmfxf" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0", GenerateName:"calico-apiserver-7569cdf946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 40, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7569cdf946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4", Pod:"calico-apiserver-7569cdf946-hmfxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a00579a664", MAC:"9a:33:04:dc:17:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:37.414220 containerd[1894]: 2026-01-13 23:41:37.376 [INFO][5159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" Namespace="calico-apiserver" Pod="calico-apiserver-7569cdf946-hmfxf" WorkloadEndpoint="ip--172--31--24--127-k8s-calico--apiserver--7569cdf946--hmfxf-eth0" Jan 13 23:41:37.458000 audit[5332]: NETFILTER_CFG table=filter:134 family=2 entries=60 op=nft_register_chain pid=5332 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:37.458000 audit[5332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29932 a0=3 a1=ffffd1434c90 a2=0 a3=ffff8f0f0fa8 items=0 ppid=4650 pid=5332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.458000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:37.522000 audit: BPF prog-id=237 op=LOAD Jan 13 23:41:37.525000 audit: BPF prog-id=238 op=LOAD Jan 13 23:41:37.525000 audit[5248]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5237 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.525000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373533373134666532303732356435366162653430386132313630 Jan 13 23:41:37.526000 audit: BPF prog-id=238 op=UNLOAD Jan 13 23:41:37.526000 audit[5248]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5237 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373533373134666532303732356435366162653430386132313630 Jan 13 23:41:37.526000 audit: BPF prog-id=239 op=LOAD Jan 13 23:41:37.526000 audit[5248]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5237 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373533373134666532303732356435366162653430386132313630 Jan 13 23:41:37.527000 audit: BPF prog-id=240 op=LOAD Jan 13 23:41:37.527000 audit[5248]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5237 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373533373134666532303732356435366162653430386132313630 Jan 13 23:41:37.527000 audit: BPF prog-id=240 op=UNLOAD Jan 13 23:41:37.527000 audit[5248]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5237 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373533373134666532303732356435366162653430386132313630 Jan 13 23:41:37.527000 audit: BPF prog-id=239 op=UNLOAD Jan 13 23:41:37.527000 audit[5248]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5237 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.527000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373533373134666532303732356435366162653430386132313630 Jan 13 23:41:37.527000 audit: BPF prog-id=241 op=LOAD Jan 13 23:41:37.527000 audit[5248]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5237 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373533373134666532303732356435366162653430386132313630 Jan 13 23:41:37.544393 containerd[1894]: time="2026-01-13T23:41:37.544320703Z" level=info msg="connecting to shim 076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4" address="unix:///run/containerd/s/93f4cbf14f9aadbd451ee63302d45e7363bef2275e3ecdb2ef819dd00b8aae7e" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:37.610000 audit: BPF prog-id=242 op=LOAD Jan 13 23:41:37.612000 audit: BPF prog-id=243 op=LOAD Jan 13 23:41:37.612000 audit[5300]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=5287 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356636373038663931633334366638323062393662353334643662 Jan 13 23:41:37.612000 audit: BPF prog-id=243 op=UNLOAD Jan 13 23:41:37.612000 audit[5300]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5287 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356636373038663931633334366638323062393662353334643662 Jan 13 23:41:37.613000 audit: BPF prog-id=244 op=LOAD Jan 13 23:41:37.613000 audit[5300]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=5287 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356636373038663931633334366638323062393662353334643662 Jan 13 23:41:37.614000 audit: BPF prog-id=245 op=LOAD Jan 13 23:41:37.614000 audit[5300]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 
ppid=5287 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356636373038663931633334366638323062393662353334643662 Jan 13 23:41:37.615000 audit: BPF prog-id=245 op=UNLOAD Jan 13 23:41:37.615000 audit[5300]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5287 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356636373038663931633334366638323062393662353334643662 Jan 13 23:41:37.615000 audit: BPF prog-id=244 op=UNLOAD Jan 13 23:41:37.615000 audit[5300]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5287 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356636373038663931633334366638323062393662353334643662 Jan 13 23:41:37.615000 audit: BPF prog-id=246 op=LOAD Jan 13 23:41:37.615000 audit[5300]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=5287 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356636373038663931633334366638323062393662353334643662 Jan 13 23:41:37.636192 systemd-networkd[1617]: cali87c33c051ba: Gained IPv6LL Jan 13 23:41:37.660371 systemd[1]: Started cri-containerd-076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4.scope - libcontainer container 076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4. 
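[annotation] The audit PROCTITLE records throughout this section carry the process command line as one hex string with NUL bytes separating the arguments, exactly as in /proc/<pid>/cmdline. A small decoder like the hypothetical sketch below makes them readable; it is not part of the logged system.

// proctitledecode.go: hypothetical helper that turns an audit PROCTITLE value
// (hex-encoded, NUL-separated argv) back into a readable command line.
package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: proctitledecode <hex proctitle>")
		os.Exit(1)
	}
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "invalid hex:", err)
		os.Exit(1)
	}
	// Arguments are separated by NUL bytes; trim a trailing NUL if present.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(args, " "))
}

Run against the proctitle value from one of the NETFILTER_CFG records above, it prints "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000", matching the comm= and exe= fields of the surrounding SYSCALL records; the runc proctitles decode to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/..." with the container ID truncated as in the log.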
Jan 13 23:41:37.734000 audit: BPF prog-id=247 op=LOAD Jan 13 23:41:37.739000 audit: BPF prog-id=248 op=LOAD Jan 13 23:41:37.739000 audit[5357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366237633333633134363861353164366137316632663334643662 Jan 13 23:41:37.739000 audit: BPF prog-id=248 op=UNLOAD Jan 13 23:41:37.739000 audit[5357]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366237633333633134363861353164366137316632663334643662 Jan 13 23:41:37.744000 audit: BPF prog-id=249 op=LOAD Jan 13 23:41:37.744000 audit[5357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366237633333633134363861353164366137316632663334643662 Jan 13 23:41:37.746000 audit: BPF prog-id=250 op=LOAD Jan 13 23:41:37.746000 audit[5357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366237633333633134363861353164366137316632663334643662 Jan 13 23:41:37.747000 audit: BPF prog-id=250 op=UNLOAD Jan 13 23:41:37.747000 audit[5357]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366237633333633134363861353164366137316632663334643662 Jan 13 23:41:37.748000 audit: BPF prog-id=249 op=UNLOAD Jan 13 23:41:37.748000 audit[5357]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366237633333633134363861353164366137316632663334643662 Jan 13 23:41:37.752000 audit: BPF prog-id=251 op=LOAD Jan 13 23:41:37.752000 audit[5357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5345 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366237633333633134363861353164366137316632663334643662 Jan 13 23:41:37.807000 audit[5375]: NETFILTER_CFG table=filter:135 family=2 entries=57 op=nft_register_chain pid=5375 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:37.807000 audit[5375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27828 a0=3 a1=ffffc793ce90 a2=0 a3=ffff824e4fa8 items=0 ppid=4650 pid=5375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:37.807000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:37.827637 containerd[1894]: time="2026-01-13T23:41:37.827575333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7569cdf946-2r7qk,Uid:bc4e2f94-7e3c-446b-9bce-55e8c6abc38d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"74753714fe20725d56abe408a216035f5aeb29e0e51930d455651778a0d8f006\"" Jan 13 23:41:37.833923 containerd[1894]: time="2026-01-13T23:41:37.833384869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:41:37.939503 systemd-networkd[1617]: calid9dc406a16c: Link UP Jan 13 23:41:37.943551 systemd-networkd[1617]: calid9dc406a16c: Gained carrier Jan 13 23:41:37.994927 containerd[1894]: time="2026-01-13T23:41:37.994629472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xqbbk,Uid:497e3e66-1726-45f4-a990-23061cc5868e,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d5f6708f91c346f820b96b534d6ba24c19349e2ed7904cdf74deb3bc31724e4\"" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.678 [INFO][5331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0 coredns-66bc5c9577- kube-system b87a7357-7c47-49ad-871f-3e101a102b84 888 0 2026-01-13 23:40:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-127 coredns-66bc5c9577-mslr6 eth0 
coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid9dc406a16c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Namespace="kube-system" Pod="coredns-66bc5c9577-mslr6" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.680 [INFO][5331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Namespace="kube-system" Pod="coredns-66bc5c9577-mslr6" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.791 [INFO][5377] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" HandleID="k8s-pod-network.6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Workload="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.791 [INFO][5377] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" HandleID="k8s-pod-network.6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Workload="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dde0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-127", "pod":"coredns-66bc5c9577-mslr6", "timestamp":"2026-01-13 23:41:37.791269922 +0000 UTC"}, Hostname:"ip-172-31-24-127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.793 [INFO][5377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.793 [INFO][5377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
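[Editor's note, not part of the captured journal: the coredns WorkloadEndpoint above lists its named ports in decimal ({dns UDP 53}, {metrics TCP 9153}, {liveness-probe TCP 8080}, {readiness-probe TCP 8181}), while the struct dumps a few records further down print the same ports in hex (Port:0x35, 0x23c1, 0x1f90, 0x1ff5). A minimal Python sketch to confirm the two encodings agree; the variable names are mine and the values are transcribed from the log.]

    # Named ports of coredns-66bc5c9577-mslr6 as they appear in this journal:
    # decimal in the CNI plugin summary, hex in the later WorkloadEndpoint dumps.
    hex_ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23C1,
                 "liveness-probe": 0x1F90, "readiness-probe": 0x1FF5}
    decimal_ports = {"dns": 53, "dns-tcp": 53, "metrics": 9153,
                     "liveness-probe": 8080, "readiness-probe": 8181}
    for name, value in hex_ports.items():
        assert value == decimal_ports[name], name
    print("hex and decimal port values match")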
Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.795 [INFO][5377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-127' Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.828 [INFO][5377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" host="ip-172-31-24-127" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.846 [INFO][5377] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-127" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.857 [INFO][5377] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.862 [INFO][5377] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.871 [INFO][5377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="ip-172-31-24-127" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.872 [INFO][5377] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" host="ip-172-31-24-127" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.880 [INFO][5377] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5 Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.898 [INFO][5377] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" host="ip-172-31-24-127" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.922 [INFO][5377] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.8/26] block=192.168.115.0/26 handle="k8s-pod-network.6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" host="ip-172-31-24-127" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.922 [INFO][5377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.8/26] handle="k8s-pod-network.6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" host="ip-172-31-24-127" Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.923 [INFO][5377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
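[Editor's note: the IPAM records above show node ip-172-31-24-127 confirming affinity for block 192.168.115.0/26 and claiming 192.168.115.8 for the coredns pod. A small sketch, using only values taken from the log, that checks the claimed address does fall inside that affine /26 block.]

    import ipaddress

    block = ipaddress.ip_network("192.168.115.0/26")    # node-affine block from the IPAM records
    pod_ip = ipaddress.ip_address("192.168.115.8")      # address claimed for coredns-66bc5c9577-mslr6

    assert pod_ip in block
    print(block.num_addresses)    # 64 addresses per /26 block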
Jan 13 23:41:38.008168 containerd[1894]: 2026-01-13 23:41:37.923 [INFO][5377] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.8/26] IPv6=[] ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" HandleID="k8s-pod-network.6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Workload="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" Jan 13 23:41:38.011070 containerd[1894]: 2026-01-13 23:41:37.931 [INFO][5331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Namespace="kube-system" Pod="coredns-66bc5c9577-mslr6" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b87a7357-7c47-49ad-871f-3e101a102b84", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 40, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"", Pod:"coredns-66bc5c9577-mslr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9dc406a16c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:38.011070 containerd[1894]: 2026-01-13 23:41:37.931 [INFO][5331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.8/32] ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Namespace="kube-system" Pod="coredns-66bc5c9577-mslr6" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" Jan 13 23:41:38.011070 containerd[1894]: 2026-01-13 23:41:37.931 [INFO][5331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9dc406a16c ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Namespace="kube-system" Pod="coredns-66bc5c9577-mslr6" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" Jan 
13 23:41:38.011070 containerd[1894]: 2026-01-13 23:41:37.945 [INFO][5331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Namespace="kube-system" Pod="coredns-66bc5c9577-mslr6" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" Jan 13 23:41:38.011070 containerd[1894]: 2026-01-13 23:41:37.947 [INFO][5331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Namespace="kube-system" Pod="coredns-66bc5c9577-mslr6" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b87a7357-7c47-49ad-871f-3e101a102b84", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 40, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-127", ContainerID:"6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5", Pod:"coredns-66bc5c9577-mslr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9dc406a16c", MAC:"56:9e:f8:2e:5c:47", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:41:38.011070 containerd[1894]: 2026-01-13 23:41:37.983 [INFO][5331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" Namespace="kube-system" Pod="coredns-66bc5c9577-mslr6" WorkloadEndpoint="ip--172--31--24--127-k8s-coredns--66bc5c9577--mslr6-eth0" Jan 13 23:41:38.026426 containerd[1894]: time="2026-01-13T23:41:38.026062645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7569cdf946-hmfxf,Uid:0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"076b7c33c1468a51d6a71f2f34d6bb783f53896f10127508ec61d4281b2a71f4\"" Jan 13 23:41:38.046609 kubelet[3516]: E0113 23:41:38.046298 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:38.115079 containerd[1894]: time="2026-01-13T23:41:38.114232921Z" level=info msg="connecting to shim 6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5" address="unix:///run/containerd/s/514170638d12d36bf6fbf21ab1cd4b5636195b0884d6cdc21dd9e411b490b5ad" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:41:38.121559 containerd[1894]: time="2026-01-13T23:41:38.121097783Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:38.126231 containerd[1894]: time="2026-01-13T23:41:38.126129798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:38.126688 containerd[1894]: time="2026-01-13T23:41:38.126569518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:41:38.129535 kubelet[3516]: E0113 23:41:38.128487 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:41:38.129535 kubelet[3516]: E0113 23:41:38.128559 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:41:38.129535 kubelet[3516]: E0113 23:41:38.128805 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7569cdf946-2r7qk_calico-apiserver(bc4e2f94-7e3c-446b-9bce-55e8c6abc38d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:38.129535 kubelet[3516]: E0113 23:41:38.128858 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:41:38.129878 containerd[1894]: time="2026-01-13T23:41:38.129710912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:41:38.146000 audit[5426]: NETFILTER_CFG table=filter:136 family=2 entries=56 op=nft_register_chain pid=5426 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:41:38.146000 audit[5426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25096 a0=3 a1=fffff1535160 a2=0 a3=ffff8488dfa8 items=0 ppid=4650 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.146000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:41:38.196281 systemd[1]: Started cri-containerd-6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5.scope - libcontainer container 6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5. Jan 13 23:41:38.235000 audit: BPF prog-id=252 op=LOAD Jan 13 23:41:38.236000 audit: BPF prog-id=253 op=LOAD Jan 13 23:41:38.236000 audit[5433]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5421 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664623934633438353863383464383264616639383135353236643731 Jan 13 23:41:38.236000 audit: BPF prog-id=253 op=UNLOAD Jan 13 23:41:38.236000 audit[5433]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5421 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664623934633438353863383464383264616639383135353236643731 Jan 13 23:41:38.236000 audit: BPF prog-id=254 op=LOAD Jan 13 23:41:38.236000 audit[5433]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5421 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664623934633438353863383464383264616639383135353236643731 Jan 13 23:41:38.236000 audit: BPF prog-id=255 op=LOAD Jan 13 23:41:38.236000 audit[5433]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 
a3=0 items=0 ppid=5421 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664623934633438353863383464383264616639383135353236643731 Jan 13 23:41:38.237000 audit: BPF prog-id=255 op=UNLOAD Jan 13 23:41:38.237000 audit[5433]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5421 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664623934633438353863383464383264616639383135353236643731 Jan 13 23:41:38.237000 audit: BPF prog-id=254 op=UNLOAD Jan 13 23:41:38.237000 audit[5433]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5421 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664623934633438353863383464383264616639383135353236643731 Jan 13 23:41:38.237000 audit: BPF prog-id=256 op=LOAD Jan 13 23:41:38.237000 audit[5433]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5421 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664623934633438353863383464383264616639383135353236643731 Jan 13 23:41:38.312876 containerd[1894]: time="2026-01-13T23:41:38.312758567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mslr6,Uid:b87a7357-7c47-49ad-871f-3e101a102b84,Namespace:kube-system,Attempt:0,} returns sandbox id \"6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5\"" Jan 13 23:41:38.322682 containerd[1894]: time="2026-01-13T23:41:38.322467471Z" level=info msg="CreateContainer within sandbox \"6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:41:38.336009 containerd[1894]: time="2026-01-13T23:41:38.334872179Z" level=info msg="Container 1e9fdeace507fd5715a4b65c2a12ad33634d342138ed14b1d86952a9c8df7c19: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:41:38.342658 containerd[1894]: time="2026-01-13T23:41:38.342572923Z" level=info msg="CreateContainer within sandbox \"6db94c4858c84d82daf9815526d716204b9e4c1c895220c0471a3e30cd73d7c5\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e9fdeace507fd5715a4b65c2a12ad33634d342138ed14b1d86952a9c8df7c19\"" Jan 13 23:41:38.345678 containerd[1894]: time="2026-01-13T23:41:38.344769553Z" level=info msg="StartContainer for \"1e9fdeace507fd5715a4b65c2a12ad33634d342138ed14b1d86952a9c8df7c19\"" Jan 13 23:41:38.346816 containerd[1894]: time="2026-01-13T23:41:38.346756534Z" level=info msg="connecting to shim 1e9fdeace507fd5715a4b65c2a12ad33634d342138ed14b1d86952a9c8df7c19" address="unix:///run/containerd/s/514170638d12d36bf6fbf21ab1cd4b5636195b0884d6cdc21dd9e411b490b5ad" protocol=ttrpc version=3 Jan 13 23:41:38.385253 systemd[1]: Started cri-containerd-1e9fdeace507fd5715a4b65c2a12ad33634d342138ed14b1d86952a9c8df7c19.scope - libcontainer container 1e9fdeace507fd5715a4b65c2a12ad33634d342138ed14b1d86952a9c8df7c19. Jan 13 23:41:38.404241 systemd-networkd[1617]: cali3a00579a664: Gained IPv6LL Jan 13 23:41:38.425512 containerd[1894]: time="2026-01-13T23:41:38.425450138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:38.427185 containerd[1894]: time="2026-01-13T23:41:38.427128660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:41:38.427671 containerd[1894]: time="2026-01-13T23:41:38.427185989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:38.428955 kubelet[3516]: E0113 23:41:38.427831 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:41:38.428955 kubelet[3516]: E0113 23:41:38.427932 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:41:38.428955 kubelet[3516]: E0113 23:41:38.428211 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xqbbk_calico-system(497e3e66-1726-45f4-a990-23061cc5868e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:38.428955 kubelet[3516]: E0113 23:41:38.428264 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:41:38.430800 containerd[1894]: time="2026-01-13T23:41:38.429612954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:41:38.442000 audit: BPF prog-id=257 op=LOAD Jan 13 23:41:38.444000 audit: BPF prog-id=258 op=LOAD Jan 13 23:41:38.444000 audit[5463]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5421 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165396664656163653530376664353731356134623635633261313261 Jan 13 23:41:38.445000 audit: BPF prog-id=258 op=UNLOAD Jan 13 23:41:38.445000 audit[5463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5421 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165396664656163653530376664353731356134623635633261313261 Jan 13 23:41:38.446000 audit: BPF prog-id=259 op=LOAD Jan 13 23:41:38.446000 audit[5463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5421 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165396664656163653530376664353731356134623635633261313261 Jan 13 23:41:38.446000 audit: BPF prog-id=260 op=LOAD Jan 13 23:41:38.446000 audit[5463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5421 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165396664656163653530376664353731356134623635633261313261 Jan 13 23:41:38.447000 audit: BPF prog-id=260 op=UNLOAD Jan 13 23:41:38.447000 audit[5463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5421 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165396664656163653530376664353731356134623635633261313261 Jan 13 23:41:38.448000 audit: BPF prog-id=259 op=UNLOAD Jan 13 23:41:38.448000 audit[5463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5421 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165396664656163653530376664353731356134623635633261313261 Jan 13 23:41:38.448000 audit: BPF prog-id=261 op=LOAD Jan 13 23:41:38.448000 audit[5463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5421 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:38.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165396664656163653530376664353731356134623635633261313261 Jan 13 23:41:38.507926 containerd[1894]: time="2026-01-13T23:41:38.507845335Z" level=info msg="StartContainer for \"1e9fdeace507fd5715a4b65c2a12ad33634d342138ed14b1d86952a9c8df7c19\" returns successfully" Jan 13 23:41:38.706206 containerd[1894]: time="2026-01-13T23:41:38.705865397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:38.707417 containerd[1894]: time="2026-01-13T23:41:38.707265417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:41:38.707417 containerd[1894]: time="2026-01-13T23:41:38.707286691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:38.707659 kubelet[3516]: E0113 23:41:38.707599 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:41:38.707770 kubelet[3516]: E0113 23:41:38.707657 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:41:38.708454 kubelet[3516]: E0113 23:41:38.708385 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7569cdf946-hmfxf_calico-apiserver(0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:38.708605 kubelet[3516]: E0113 23:41:38.708493 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:41:38.724205 systemd-networkd[1617]: cali602e8f38ba9: Gained IPv6LL Jan 13 23:41:38.788147 systemd-networkd[1617]: cali53170591cfd: Gained IPv6LL Jan 13 23:41:39.045872 kubelet[3516]: E0113 23:41:39.045666 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:41:39.055300 kubelet[3516]: E0113 23:41:39.055221 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:41:39.056348 kubelet[3516]: E0113 23:41:39.055158 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:41:39.194000 audit[5504]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=5504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:39.194000 audit[5504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdd98aa70 a2=0 a3=1 items=0 ppid=3663 pid=5504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:39.194000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:39.199000 audit[5504]: NETFILTER_CFG table=nat:138 family=2 entries=14 op=nft_register_rule pid=5504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:39.199000 audit[5504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdd98aa70 a2=0 a3=1 items=0 ppid=3663 pid=5504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:39.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:39.236508 systemd-networkd[1617]: calid9dc406a16c: Gained IPv6LL Jan 13 23:41:39.262838 kubelet[3516]: I0113 23:41:39.262742 3516 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mslr6" podStartSLOduration=62.262719385 podStartE2EDuration="1m2.262719385s" podCreationTimestamp="2026-01-13 23:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:41:39.166961977 +0000 UTC m=+67.042988361" watchObservedRunningTime="2026-01-13 23:41:39.262719385 +0000 UTC m=+67.138745733" Jan 13 23:41:40.060459 kubelet[3516]: E0113 23:41:40.059764 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:41:40.259000 audit[5506]: NETFILTER_CFG table=filter:139 family=2 entries=17 op=nft_register_rule pid=5506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:40.262238 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 13 23:41:40.262426 kernel: audit: type=1325 audit(1768347700.259:750): table=filter:139 family=2 entries=17 op=nft_register_rule pid=5506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:40.259000 audit[5506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffa47feb0 a2=0 a3=1 items=0 ppid=3663 pid=5506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:40.277130 kernel: audit: type=1300 audit(1768347700.259:750): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffa47feb0 a2=0 a3=1 items=0 ppid=3663 pid=5506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:40.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:40.284355 kernel: audit: type=1327 audit(1768347700.259:750): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:40.284000 audit[5506]: NETFILTER_CFG table=nat:140 family=2 entries=35 op=nft_register_chain pid=5506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:40.290440 kernel: audit: type=1325 audit(1768347700.284:751): table=nat:140 family=2 entries=35 op=nft_register_chain pid=5506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:40.284000 audit[5506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffffa47feb0 a2=0 a3=1 items=0 ppid=3663 pid=5506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:40.300327 kernel: audit: type=1300 audit(1768347700.284:751): arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffffa47feb0 a2=0 a3=1 items=0 ppid=3663 pid=5506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:40.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:40.304939 kernel: audit: type=1327 audit(1768347700.284:751): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:40.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.24.127:22-20.161.92.111:57640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:40.350532 systemd[1]: Started sshd@7-172.31.24.127:22-20.161.92.111:57640.service - OpenSSH per-connection server daemon (20.161.92.111:57640). Jan 13 23:41:40.364135 kernel: audit: type=1130 audit(1768347700.349:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.24.127:22-20.161.92.111:57640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:40.880000 audit[5511]: USER_ACCT pid=5511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:40.890945 sshd[5511]: Accepted publickey for core from 20.161.92.111 port 57640 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:41:40.893183 sshd-session[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:41:40.889000 audit[5511]: CRED_ACQ pid=5511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:40.901388 kernel: audit: type=1101 audit(1768347700.880:753): pid=5511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:40.901541 kernel: audit: type=1103 audit(1768347700.889:754): pid=5511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:40.906752 kernel: audit: type=1006 audit(1768347700.890:755): pid=5511 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 13 23:41:40.890000 audit[5511]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7a3d3b0 a2=3 a3=0 items=0 ppid=1 pid=5511 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:40.890000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:41:40.918756 systemd-logind[1857]: New session 9 of user core. Jan 13 23:41:40.925297 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 13 23:41:40.940000 audit[5511]: USER_START pid=5511 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:40.947000 audit[5515]: CRED_ACQ pid=5515 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:41.421870 sshd[5515]: Connection closed by 20.161.92.111 port 57640 Jan 13 23:41:41.421722 sshd-session[5511]: pam_unix(sshd:session): session closed for user core Jan 13 23:41:41.424000 audit[5511]: USER_END pid=5511 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:41.425000 audit[5511]: CRED_DISP pid=5511 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:41.433006 systemd[1]: sshd@7-172.31.24.127:22-20.161.92.111:57640.service: Deactivated successfully. Jan 13 23:41:41.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.24.127:22-20.161.92.111:57640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:41.443091 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 23:41:41.450884 systemd-logind[1857]: Session 9 logged out. Waiting for processes to exit. Jan 13 23:41:41.455797 systemd-logind[1857]: Removed session 9. 
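[Editor's note: the audit PROCTITLE records in this capture store the audited command line as hex with NUL-separated arguments. A minimal decoder sketch; the variable names are mine, and the sample value is the proctitle from the iptables-restore NETFILTER_CFG records above.]

    # Decode an audit PROCTITLE value (hex-encoded argv, arguments separated by NUL bytes).
    proctitle = ("69707461626C65732D726573746F7265002D7700"
                 "35002D2D6E6F666C757368002D2D636F756E74657273")   # copied from the records above
    argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)   # ['iptables-restore', '-w', '5', '--noflush', '--counters']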
Jan 13 23:41:41.575083 ntpd[1846]: Listen normally on 6 vxlan.calico 192.168.115.0:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 6 vxlan.calico 192.168.115.0:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 7 vxlan.calico [fe80::644d:d8ff:fe22:86ff%4]:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 8 calif66d11f0612 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 9 calia8ad2c80264 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 10 cali7c1d19f53b3 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 11 cali87c33c051ba [fe80::ecee:eeff:feee:eeee%10]:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 12 cali53170591cfd [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 13 cali602e8f38ba9 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 14 cali3a00579a664 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 13 23:41:41.576077 ntpd[1846]: 13 Jan 23:41:41 ntpd[1846]: Listen normally on 15 calid9dc406a16c [fe80::ecee:eeff:feee:eeee%14]:123 Jan 13 23:41:41.575174 ntpd[1846]: Listen normally on 7 vxlan.calico [fe80::644d:d8ff:fe22:86ff%4]:123 Jan 13 23:41:41.575226 ntpd[1846]: Listen normally on 8 calif66d11f0612 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 13 23:41:41.575274 ntpd[1846]: Listen normally on 9 calia8ad2c80264 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 23:41:41.575322 ntpd[1846]: Listen normally on 10 cali7c1d19f53b3 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 23:41:41.575367 ntpd[1846]: Listen normally on 11 cali87c33c051ba [fe80::ecee:eeff:feee:eeee%10]:123 Jan 13 23:41:41.575411 ntpd[1846]: Listen normally on 12 cali53170591cfd [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 23:41:41.575457 ntpd[1846]: Listen normally on 13 cali602e8f38ba9 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 23:41:41.575512 ntpd[1846]: Listen normally on 14 cali3a00579a664 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 13 23:41:41.575560 ntpd[1846]: Listen normally on 15 calid9dc406a16c [fe80::ecee:eeff:feee:eeee%14]:123 Jan 13 23:41:46.047000 audit[5537]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:46.049860 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 13 23:41:46.050014 kernel: audit: type=1325 audit(1768347706.047:761): table=filter:141 family=2 entries=14 op=nft_register_rule pid=5537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:46.047000 audit[5537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffea9da660 a2=0 a3=1 items=0 ppid=3663 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:46.059797 kernel: audit: type=1300 audit(1768347706.047:761): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffea9da660 a2=0 a3=1 items=0 ppid=3663 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 13 23:41:46.060148 kernel: audit: type=1327 audit(1768347706.047:761): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:46.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:46.123000 audit[5537]: NETFILTER_CFG table=nat:142 family=2 entries=56 op=nft_register_chain pid=5537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:46.123000 audit[5537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffea9da660 a2=0 a3=1 items=0 ppid=3663 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:46.137467 kernel: audit: type=1325 audit(1768347706.123:762): table=nat:142 family=2 entries=56 op=nft_register_chain pid=5537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:41:46.137608 kernel: audit: type=1300 audit(1768347706.123:762): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffea9da660 a2=0 a3=1 items=0 ppid=3663 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:46.123000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:46.141191 kernel: audit: type=1327 audit(1768347706.123:762): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:41:46.523208 systemd[1]: Started sshd@8-172.31.24.127:22-20.161.92.111:41346.service - OpenSSH per-connection server daemon (20.161.92.111:41346). Jan 13 23:41:46.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.24.127:22-20.161.92.111:41346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:46.531990 kernel: audit: type=1130 audit(1768347706.523:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.24.127:22-20.161.92.111:41346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:41:46.999000 audit[5541]: USER_ACCT pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:47.001335 sshd[5541]: Accepted publickey for core from 20.161.92.111 port 41346 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:41:47.008971 kernel: audit: type=1101 audit(1768347706.999:764): pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:47.008000 audit[5541]: CRED_ACQ pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:47.014809 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:41:47.019879 kernel: audit: type=1103 audit(1768347707.008:765): pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:47.020117 kernel: audit: type=1006 audit(1768347707.008:766): pid=5541 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 13 23:41:47.008000 audit[5541]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff70d7550 a2=3 a3=0 items=0 ppid=1 pid=5541 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:47.008000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:41:47.031568 systemd-logind[1857]: New session 10 of user core. Jan 13 23:41:47.041326 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 13 23:41:47.051000 audit[5541]: USER_START pid=5541 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:47.056000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:47.406105 sshd[5545]: Connection closed by 20.161.92.111 port 41346 Jan 13 23:41:47.406286 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Jan 13 23:41:47.412000 audit[5541]: USER_END pid=5541 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:47.413000 audit[5541]: CRED_DISP pid=5541 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:47.419323 systemd[1]: sshd@8-172.31.24.127:22-20.161.92.111:41346.service: Deactivated successfully. Jan 13 23:41:47.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.24.127:22-20.161.92.111:41346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:47.426529 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 23:41:47.430472 systemd-logind[1857]: Session 10 logged out. Waiting for processes to exit. Jan 13 23:41:47.437222 systemd-logind[1857]: Removed session 10. 
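[Editor's note: the records before and after this point repeat the same failure for every Calico image: containerd receives a 404 from ghcr.io for the :v3.30.4 tag, kubelet reports ErrImagePull, and the pod is then backed off with ImagePullBackOff. A hypothetical triage sketch that tallies the failing image references in a saved copy of this journal; the file name and regex are assumptions, not anything the cluster itself runs.]

    import re
    from collections import Counter

    # Tally "failed to resolve image" errors per image reference in a saved journal dump.
    # Each pull attempt is logged several times (containerd plus kubelet), so this counts
    # log lines, not distinct attempts.
    pattern = re.compile(r"failed to resolve image: (\S+?): not found")
    failures = Counter()
    with open("journal.txt", encoding="utf-8", errors="replace") as log:   # assumed file name
        for line in log:
            failures.update(pattern.findall(line))
    for image, count in failures.most_common():
        print(f"{count:4d}  {image}")   # e.g. ghcr.io/flatcar/calico/apiserver:v3.30.4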
Jan 13 23:41:48.397956 containerd[1894]: time="2026-01-13T23:41:48.397790233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:41:48.664694 containerd[1894]: time="2026-01-13T23:41:48.664532532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:48.665941 containerd[1894]: time="2026-01-13T23:41:48.665749388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:41:48.666431 containerd[1894]: time="2026-01-13T23:41:48.665834762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:48.666550 kubelet[3516]: E0113 23:41:48.666463 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:41:48.667133 kubelet[3516]: E0113 23:41:48.666552 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:41:48.667133 kubelet[3516]: E0113 23:41:48.666764 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6488775c94-xpm9n_calico-system(c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:48.667133 kubelet[3516]: E0113 23:41:48.666826 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:41:48.668070 containerd[1894]: time="2026-01-13T23:41:48.667971938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:41:48.963576 containerd[1894]: time="2026-01-13T23:41:48.963287436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:48.964386 containerd[1894]: time="2026-01-13T23:41:48.964244855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:41:48.964386 containerd[1894]: time="2026-01-13T23:41:48.964314106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:48.964691 kubelet[3516]: E0113 
23:41:48.964543 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:41:48.964691 kubelet[3516]: E0113 23:41:48.964600 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:41:48.964826 kubelet[3516]: E0113 23:41:48.964711 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-d477b98f6-wrppl_calico-system(999f0f43-0933-4443-a84f-03be4dcf7cf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:48.966605 containerd[1894]: time="2026-01-13T23:41:48.966164434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:41:49.244624 containerd[1894]: time="2026-01-13T23:41:49.244375800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:49.245563 containerd[1894]: time="2026-01-13T23:41:49.245508735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:41:49.245931 containerd[1894]: time="2026-01-13T23:41:49.245575848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:49.246390 kubelet[3516]: E0113 23:41:49.246339 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:41:49.246736 kubelet[3516]: E0113 23:41:49.246604 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:41:49.247105 kubelet[3516]: E0113 23:41:49.246890 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-d477b98f6-wrppl_calico-system(999f0f43-0933-4443-a84f-03be4dcf7cf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:49.247339 kubelet[3516]: E0113 23:41:49.247277 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:41:49.397015 containerd[1894]: time="2026-01-13T23:41:49.396941638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:41:49.680627 containerd[1894]: time="2026-01-13T23:41:49.680336463Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:49.682226 containerd[1894]: time="2026-01-13T23:41:49.681661853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:41:49.682226 containerd[1894]: time="2026-01-13T23:41:49.681682311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:49.682541 kubelet[3516]: E0113 23:41:49.682451 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:41:49.684398 kubelet[3516]: E0113 23:41:49.682597 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:41:49.684398 kubelet[3516]: E0113 23:41:49.682755 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xqbbk_calico-system(497e3e66-1726-45f4-a990-23061cc5868e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:49.684398 kubelet[3516]: E0113 23:41:49.682843 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:41:51.396342 containerd[1894]: time="2026-01-13T23:41:51.396259087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:41:51.657089 containerd[1894]: time="2026-01-13T23:41:51.656847662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:51.658527 containerd[1894]: time="2026-01-13T23:41:51.658438025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:41:51.658708 containerd[1894]: time="2026-01-13T23:41:51.658583682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:51.659072 kubelet[3516]: E0113 23:41:51.658972 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:41:51.659072 kubelet[3516]: E0113 23:41:51.659061 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:41:51.661163 kubelet[3516]: E0113 23:41:51.659188 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7569cdf946-2r7qk_calico-apiserver(bc4e2f94-7e3c-446b-9bce-55e8c6abc38d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:51.661163 kubelet[3516]: E0113 23:41:51.659243 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:41:52.399387 containerd[1894]: time="2026-01-13T23:41:52.399172164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:41:52.515956 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 13 23:41:52.516110 kernel: audit: type=1130 audit(1768347712.510:772): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.24.127:22-20.161.92.111:56428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:52.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.24.127:22-20.161.92.111:56428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:52.511358 systemd[1]: Started sshd@9-172.31.24.127:22-20.161.92.111:56428.service - OpenSSH per-connection server daemon (20.161.92.111:56428). 
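Each failed Calico pull above follows the same chain: containerd reports "fetch failed after status: 404 Not Found" against ghcr.io, the PullImage RPC comes back as NotFound, and kubelet surfaces that as ErrImagePull before pod_workers gives up on the sync. As a rough way to list which references are involved in a capture like this (the regex below is my own, and it counts repeated mentions of the same failure, so treat the numbers as approximate):

import re
from collections import Counter

# Tally the image references named in "failed to resolve image: ...: not found"
# errors as they appear in the journal text above.
NOT_FOUND = re.compile(r'failed to resolve image: (ghcr\.io/\S+?): not found')

def failing_images(journal_text):
    return Counter(m.group(1) for m in NOT_FOUND.finditer(journal_text))

In this section that set is the Calico images kube-controllers, whisker, whisker-backend, goldmane, apiserver, csi and node-driver-registrar, all at tag v3.30.4.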
Jan 13 23:41:52.661352 containerd[1894]: time="2026-01-13T23:41:52.661199766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:52.662945 containerd[1894]: time="2026-01-13T23:41:52.662781821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:41:52.663272 containerd[1894]: time="2026-01-13T23:41:52.663167802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:52.664166 kubelet[3516]: E0113 23:41:52.664091 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:41:52.664750 kubelet[3516]: E0113 23:41:52.664168 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:41:52.664750 kubelet[3516]: E0113 23:41:52.664433 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:52.668412 containerd[1894]: time="2026-01-13T23:41:52.668067703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:41:52.941110 containerd[1894]: time="2026-01-13T23:41:52.940963472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:52.942586 containerd[1894]: time="2026-01-13T23:41:52.942394323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:41:52.942586 containerd[1894]: time="2026-01-13T23:41:52.942474367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:52.943183 kubelet[3516]: E0113 23:41:52.943091 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:41:52.943183 kubelet[3516]: E0113 23:41:52.943169 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:41:52.943535 kubelet[3516]: E0113 23:41:52.943482 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-7569cdf946-hmfxf_calico-apiserver(0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:52.943653 kubelet[3516]: E0113 23:41:52.943553 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:41:52.944269 containerd[1894]: time="2026-01-13T23:41:52.944209054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:41:53.000000 audit[5561]: USER_ACCT pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.001827 sshd[5561]: Accepted publickey for core from 20.161.92.111 port 56428 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:41:53.009354 kernel: audit: type=1101 audit(1768347713.000:773): pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.009000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.021151 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:41:53.026092 kernel: audit: type=1103 audit(1768347713.009:774): pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.026168 kernel: audit: type=1006 audit(1768347713.016:775): pid=5561 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 13 23:41:53.016000 audit[5561]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1853f50 a2=3 a3=0 items=0 ppid=1 pid=5561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:53.034525 kernel: audit: type=1300 audit(1768347713.016:775): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1853f50 a2=3 a3=0 items=0 ppid=1 pid=5561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:53.035156 kernel: audit: type=1327 audit(1768347713.016:775): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 
23:41:53.016000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:41:53.044886 systemd-logind[1857]: New session 11 of user core. Jan 13 23:41:53.053292 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 23:41:53.062000 audit[5561]: USER_START pid=5561 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.082883 kernel: audit: type=1105 audit(1768347713.062:776): pid=5561 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.083030 kernel: audit: type=1103 audit(1768347713.068:777): pid=5565 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.068000 audit[5565]: CRED_ACQ pid=5565 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.252586 containerd[1894]: time="2026-01-13T23:41:53.252499316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:41:53.254358 containerd[1894]: time="2026-01-13T23:41:53.253944839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:41:53.254358 containerd[1894]: time="2026-01-13T23:41:53.253996188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:41:53.254579 kubelet[3516]: E0113 23:41:53.254405 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:41:53.254579 kubelet[3516]: E0113 23:41:53.254476 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:41:53.254703 kubelet[3516]: E0113 23:41:53.254591 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:41:53.254703 kubelet[3516]: E0113 23:41:53.254666 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:41:53.402780 sshd[5565]: Connection closed by 20.161.92.111 port 56428 Jan 13 23:41:53.403793 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Jan 13 23:41:53.408000 audit[5561]: USER_END pid=5561 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.415302 systemd[1]: sshd@9-172.31.24.127:22-20.161.92.111:56428.service: Deactivated successfully. Jan 13 23:41:53.409000 audit[5561]: CRED_DISP pid=5561 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.425037 kernel: audit: type=1106 audit(1768347713.408:778): pid=5561 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.425179 kernel: audit: type=1104 audit(1768347713.409:779): pid=5561 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.421892 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 23:41:53.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.24.127:22-20.161.92.111:56428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:53.426007 systemd-logind[1857]: Session 11 logged out. Waiting for processes to exit. Jan 13 23:41:53.431303 systemd-logind[1857]: Removed session 11. Jan 13 23:41:53.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.24.127:22-20.161.92.111:56430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:53.499193 systemd[1]: Started sshd@10-172.31.24.127:22-20.161.92.111:56430.service - OpenSSH per-connection server daemon (20.161.92.111:56430). 
Jan 13 23:41:53.971000 audit[5578]: USER_ACCT pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.972601 sshd[5578]: Accepted publickey for core from 20.161.92.111 port 56430 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:41:53.974000 audit[5578]: CRED_ACQ pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:53.974000 audit[5578]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1ca3730 a2=3 a3=0 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:53.974000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:41:53.977741 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:41:53.996712 systemd-logind[1857]: New session 12 of user core. Jan 13 23:41:54.004257 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 23:41:54.013000 audit[5578]: USER_START pid=5578 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:54.019000 audit[5588]: CRED_ACQ pid=5588 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:54.439667 sshd[5588]: Connection closed by 20.161.92.111 port 56430 Jan 13 23:41:54.440414 sshd-session[5578]: pam_unix(sshd:session): session closed for user core Jan 13 23:41:54.444000 audit[5578]: USER_END pid=5578 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:54.445000 audit[5578]: CRED_DISP pid=5578 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:54.452331 systemd[1]: sshd@10-172.31.24.127:22-20.161.92.111:56430.service: Deactivated successfully. Jan 13 23:41:54.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.24.127:22-20.161.92.111:56430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:54.458428 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 23:41:54.462047 systemd-logind[1857]: Session 12 logged out. Waiting for processes to exit. 
Jan 13 23:41:54.466988 systemd-logind[1857]: Removed session 12. Jan 13 23:41:54.538797 systemd[1]: Started sshd@11-172.31.24.127:22-20.161.92.111:56442.service - OpenSSH per-connection server daemon (20.161.92.111:56442). Jan 13 23:41:54.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.24.127:22-20.161.92.111:56442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:55.014000 audit[5598]: USER_ACCT pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:55.016817 sshd[5598]: Accepted publickey for core from 20.161.92.111 port 56442 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:41:55.017000 audit[5598]: CRED_ACQ pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:55.018000 audit[5598]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4c697a0 a2=3 a3=0 items=0 ppid=1 pid=5598 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:41:55.018000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:41:55.021863 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:41:55.038010 systemd-logind[1857]: New session 13 of user core. Jan 13 23:41:55.043299 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 23:41:55.053000 audit[5598]: USER_START pid=5598 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:55.058000 audit[5602]: CRED_ACQ pid=5602 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:55.436097 sshd[5602]: Connection closed by 20.161.92.111 port 56442 Jan 13 23:41:55.437976 sshd-session[5598]: pam_unix(sshd:session): session closed for user core Jan 13 23:41:55.440000 audit[5598]: USER_END pid=5598 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:55.441000 audit[5598]: CRED_DISP pid=5598 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:41:55.447570 systemd-logind[1857]: Session 13 logged out. Waiting for processes to exit. 
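The kernel audit type=1327 records above carry the process title hex-encoded in the proctitle= field (argv entries are NUL-separated in the general case; here it is a single string). Decoding the value that repeats through these SSH logins shows it is simply the privileged sshd session process:

# Decode the PROCTITLE value seen in the audit records above.
raw = bytes.fromhex("737368642D73657373696F6E3A20636F7265205B707269765D")
print(raw.replace(b"\x00", b" ").decode())   # -> sshd-session: core [priv]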
Jan 13 23:41:55.448028 systemd[1]: sshd@11-172.31.24.127:22-20.161.92.111:56442.service: Deactivated successfully. Jan 13 23:41:55.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.24.127:22-20.161.92.111:56442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:41:55.453830 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 23:41:55.458351 systemd-logind[1857]: Removed session 13. Jan 13 23:42:00.397152 kubelet[3516]: E0113 23:42:00.397016 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:42:00.532877 systemd[1]: Started sshd@12-172.31.24.127:22-20.161.92.111:56450.service - OpenSSH per-connection server daemon (20.161.92.111:56450). Jan 13 23:42:00.540714 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 13 23:42:00.540798 kernel: audit: type=1130 audit(1768347720.532:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.24.127:22-20.161.92.111:56450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:00.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.24.127:22-20.161.92.111:56450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:01.005000 audit[5622]: USER_ACCT pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.006970 sshd[5622]: Accepted publickey for core from 20.161.92.111 port 56450 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:01.013959 kernel: audit: type=1101 audit(1768347721.005:800): pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.013000 audit[5622]: CRED_ACQ pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.017006 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:01.025132 kernel: audit: type=1103 audit(1768347721.013:801): pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.025264 kernel: audit: type=1006 audit(1768347721.013:802): pid=5622 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 13 23:42:01.013000 audit[5622]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf5a8d00 a2=3 a3=0 items=0 ppid=1 pid=5622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:01.033172 kernel: audit: type=1300 audit(1768347721.013:802): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf5a8d00 a2=3 a3=0 items=0 ppid=1 pid=5622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:01.013000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:01.036281 kernel: audit: type=1327 audit(1768347721.013:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:01.042311 systemd-logind[1857]: New session 14 of user core. Jan 13 23:42:01.051010 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 13 23:42:01.061000 audit[5622]: USER_START pid=5622 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.070967 kernel: audit: type=1105 audit(1768347721.061:803): pid=5622 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.070000 audit[5649]: CRED_ACQ pid=5649 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.084959 kernel: audit: type=1103 audit(1768347721.070:804): pid=5649 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.383734 sshd[5649]: Connection closed by 20.161.92.111 port 56450 Jan 13 23:42:01.385222 sshd-session[5622]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:01.424000 audit[5622]: USER_END pid=5622 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.434266 systemd[1]: sshd@12-172.31.24.127:22-20.161.92.111:56450.service: Deactivated successfully. Jan 13 23:42:01.428000 audit[5622]: CRED_DISP pid=5622 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.440384 kernel: audit: type=1106 audit(1768347721.424:805): pid=5622 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.440502 kernel: audit: type=1104 audit(1768347721.428:806): pid=5622 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:01.441002 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 23:42:01.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.24.127:22-20.161.92.111:56450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:01.445417 systemd-logind[1857]: Session 14 logged out. Waiting for processes to exit. Jan 13 23:42:01.447969 systemd-logind[1857]: Removed session 14. 
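The kubelet messages between 23:42:00 and 23:42:07 around here report ImagePullBackOff ("Back-off pulling image ...") rather than fresh ErrImagePull failures, i.e. kubelet is waiting out a back-off window instead of hitting ghcr.io on every sync; it retries the pulls again from 23:42:12 below. A small classifier over the pod_workers lines makes the two states easy to separate when reading a capture like this (regex and grouping are my own):

import re

# Group "Error syncing pod, skipping" messages by pod and note whether each
# report is a fresh pull failure or a back-off, based on the err= payload.
POD_ERR = re.compile(r'"Error syncing pod, skipping" err="(?P<err>.*?)" pod="(?P<pod>[^"]+)"', re.S)

def classify(journal_text):
    out = {}
    for m in POD_ERR.finditer(journal_text):
        kind = 'ImagePullBackOff' if 'ImagePullBackOff' in m['err'] else 'ErrImagePull'
        out.setdefault(m['pod'], set()).add(kind)
    return out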
Jan 13 23:42:03.396317 kubelet[3516]: E0113 23:42:03.396247 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:42:03.400449 kubelet[3516]: E0113 23:42:03.400337 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:42:04.395868 kubelet[3516]: E0113 23:42:04.395762 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:42:04.398367 kubelet[3516]: E0113 23:42:04.398162 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:42:06.487796 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:42:06.488015 kernel: audit: type=1130 audit(1768347726.479:808): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.24.127:22-20.161.92.111:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:06.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.24.127:22-20.161.92.111:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:06.480279 systemd[1]: Started sshd@13-172.31.24.127:22-20.161.92.111:38600.service - OpenSSH per-connection server daemon (20.161.92.111:38600). Jan 13 23:42:06.955000 audit[5664]: USER_ACCT pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:06.963148 sshd[5664]: Accepted publickey for core from 20.161.92.111 port 38600 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:06.962000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:06.970172 kernel: audit: type=1101 audit(1768347726.955:809): pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:06.970296 kernel: audit: type=1103 audit(1768347726.962:810): pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:06.965810 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:06.975311 kernel: audit: type=1006 audit(1768347726.963:811): pid=5664 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 13 23:42:06.963000 audit[5664]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbc73690 a2=3 a3=0 items=0 ppid=1 pid=5664 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:06.982265 kernel: audit: type=1300 audit(1768347726.963:811): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbc73690 a2=3 a3=0 items=0 ppid=1 pid=5664 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:06.963000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:06.985035 kernel: audit: type=1327 audit(1768347726.963:811): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:06.992076 systemd-logind[1857]: New session 15 of user core. Jan 13 23:42:06.998748 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 13 23:42:07.006000 audit[5664]: USER_START pid=5664 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:07.016983 kernel: audit: type=1105 audit(1768347727.006:812): pid=5664 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:07.017000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:07.024962 kernel: audit: type=1103 audit(1768347727.017:813): pid=5668 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:07.339225 sshd[5668]: Connection closed by 20.161.92.111 port 38600 Jan 13 23:42:07.339074 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:07.341000 audit[5664]: USER_END pid=5664 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:07.350152 systemd[1]: sshd@13-172.31.24.127:22-20.161.92.111:38600.service: Deactivated successfully. Jan 13 23:42:07.342000 audit[5664]: CRED_DISP pid=5664 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:07.355654 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 23:42:07.356926 kernel: audit: type=1106 audit(1768347727.341:814): pid=5664 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:07.357515 kernel: audit: type=1104 audit(1768347727.342:815): pid=5664 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:07.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.24.127:22-20.161.92.111:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:07.363625 systemd-logind[1857]: Session 15 logged out. Waiting for processes to exit. Jan 13 23:42:07.366971 systemd-logind[1857]: Removed session 15. 
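The interleaved kernel lines of the form "audit: type=NNNN ..." are the same events the userspace records above report by name, so the numeric types in this capture can be read directly against them. The standard Linux audit names, corroborated by the adjacent records here, are:

# Numeric audit record types appearing in this capture and the names the
# corresponding userspace records use.
AUDIT_TYPES = {
    1006: "LOGIN",          # auid/ses assignment for the new SSH session
    1101: "USER_ACCT",      # PAM accounting
    1103: "CRED_ACQ",
    1104: "CRED_DISP",
    1105: "USER_START",
    1106: "USER_END",
    1130: "SERVICE_START",  # systemd starting an sshd@... per-connection unit
    1300: "SYSCALL",
    1327: "PROCTITLE",
}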
Jan 13 23:42:07.395872 kubelet[3516]: E0113 23:42:07.395795 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:42:12.398933 containerd[1894]: time="2026-01-13T23:42:12.398194789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:42:12.439935 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:42:12.440071 kernel: audit: type=1130 audit(1768347732.436:817): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.24.127:22-20.161.92.111:52422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:12.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.24.127:22-20.161.92.111:52422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:12.436835 systemd[1]: Started sshd@14-172.31.24.127:22-20.161.92.111:52422.service - OpenSSH per-connection server daemon (20.161.92.111:52422). Jan 13 23:42:12.676250 containerd[1894]: time="2026-01-13T23:42:12.676081045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:12.678106 containerd[1894]: time="2026-01-13T23:42:12.678018873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:42:12.678264 containerd[1894]: time="2026-01-13T23:42:12.678161493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:12.678616 kubelet[3516]: E0113 23:42:12.678525 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:42:12.679189 kubelet[3516]: E0113 23:42:12.678621 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:42:12.679189 kubelet[3516]: E0113 23:42:12.678779 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6488775c94-xpm9n_calico-system(c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:12.679189 
kubelet[3516]: E0113 23:42:12.678848 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:42:12.933000 audit[5684]: USER_ACCT pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:12.942752 sshd[5684]: Accepted publickey for core from 20.161.92.111 port 52422 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:12.944716 sshd-session[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:12.942000 audit[5684]: CRED_ACQ pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:12.953731 kernel: audit: type=1101 audit(1768347732.933:818): pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:12.953893 kernel: audit: type=1103 audit(1768347732.942:819): pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:12.958094 kernel: audit: type=1006 audit(1768347732.942:820): pid=5684 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 13 23:42:12.942000 audit[5684]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc102b9d0 a2=3 a3=0 items=0 ppid=1 pid=5684 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.966820 kernel: audit: type=1300 audit(1768347732.942:820): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc102b9d0 a2=3 a3=0 items=0 ppid=1 pid=5684 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:12.942000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:12.972748 kernel: audit: type=1327 audit(1768347732.942:820): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:12.976387 systemd-logind[1857]: New session 16 of user core. Jan 13 23:42:12.984269 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 13 23:42:12.992000 audit[5684]: USER_START pid=5684 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:13.003000 audit[5688]: CRED_ACQ pid=5688 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:13.010344 kernel: audit: type=1105 audit(1768347732.992:821): pid=5684 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:13.010503 kernel: audit: type=1103 audit(1768347733.003:822): pid=5688 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:13.472953 sshd[5688]: Connection closed by 20.161.92.111 port 52422 Jan 13 23:42:13.473052 sshd-session[5684]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:13.479000 audit[5684]: USER_END pid=5684 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:13.495061 systemd[1]: sshd@14-172.31.24.127:22-20.161.92.111:52422.service: Deactivated successfully. Jan 13 23:42:13.480000 audit[5684]: CRED_DISP pid=5684 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:13.501507 kernel: audit: type=1106 audit(1768347733.479:823): pid=5684 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:13.502007 kernel: audit: type=1104 audit(1768347733.480:824): pid=5684 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:13.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.24.127:22-20.161.92.111:52422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:13.506741 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 23:42:13.513828 systemd-logind[1857]: Session 16 logged out. Waiting for processes to exit. Jan 13 23:42:13.519846 systemd-logind[1857]: Removed session 16. 
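After the back-off reported around 23:42:00, containerd starts a new round of pulls: kube-controllers is retried at 23:42:12, roughly 24 seconds after the 23:41:48 attempt, and the other images follow below. A sketch for measuring those gaps from a capture like this one (the regex and helper are mine; the timestamps are trimmed from nanoseconds to microseconds so the standard library can parse them):

import re
from datetime import datetime

# Report the spacing between successive PullImage attempts per image, using
# the containerd time="...Z" ... msg="PullImage \"...\"" lines above.
PULL = re.compile(r'time="(?P<ts>[^"]+)Z" level=info msg="PullImage \\"(?P<image>[^\\"]+)\\""')

def pull_intervals(journal_text):
    seen = {}
    for m in PULL.finditer(journal_text):
        ts = datetime.fromisoformat(re.sub(r'(\.\d{6})\d+$', r'\1', m['ts']))
        if m['image'] in seen:
            yield m['image'], (ts - seen[m['image']]).total_seconds()
        seen[m['image']] = ts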
Jan 13 23:42:14.398431 containerd[1894]: time="2026-01-13T23:42:14.397671018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:42:14.664149 containerd[1894]: time="2026-01-13T23:42:14.663837376Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:14.666122 containerd[1894]: time="2026-01-13T23:42:14.666050082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:42:14.666247 containerd[1894]: time="2026-01-13T23:42:14.666163467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:14.666509 kubelet[3516]: E0113 23:42:14.666438 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:42:14.667115 kubelet[3516]: E0113 23:42:14.666505 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:42:14.667115 kubelet[3516]: E0113 23:42:14.666620 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-d477b98f6-wrppl_calico-system(999f0f43-0933-4443-a84f-03be4dcf7cf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:14.669149 containerd[1894]: time="2026-01-13T23:42:14.668812543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:42:14.938398 containerd[1894]: time="2026-01-13T23:42:14.938108553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:14.940619 containerd[1894]: time="2026-01-13T23:42:14.940428809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:42:14.940619 containerd[1894]: time="2026-01-13T23:42:14.940434656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:14.940838 kubelet[3516]: E0113 23:42:14.940782 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:42:14.940930 kubelet[3516]: E0113 23:42:14.940840 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:42:14.941001 kubelet[3516]: E0113 23:42:14.940978 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-d477b98f6-wrppl_calico-system(999f0f43-0933-4443-a84f-03be4dcf7cf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:14.941078 kubelet[3516]: E0113 23:42:14.941048 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:42:16.399181 containerd[1894]: time="2026-01-13T23:42:16.397235682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:42:16.673401 containerd[1894]: time="2026-01-13T23:42:16.673042703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:16.675405 containerd[1894]: time="2026-01-13T23:42:16.675337025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:42:16.675534 containerd[1894]: time="2026-01-13T23:42:16.675458538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:16.675763 kubelet[3516]: E0113 23:42:16.675709 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:42:16.676477 kubelet[3516]: E0113 23:42:16.675775 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:42:16.676477 kubelet[3516]: E0113 23:42:16.675952 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xqbbk_calico-system(497e3e66-1726-45f4-a990-23061cc5868e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:16.676477 kubelet[3516]: E0113 23:42:16.676006 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:42:17.396923 containerd[1894]: time="2026-01-13T23:42:17.396763811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:42:17.662172 containerd[1894]: time="2026-01-13T23:42:17.662024774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:17.664260 containerd[1894]: time="2026-01-13T23:42:17.664185818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:42:17.664408 containerd[1894]: time="2026-01-13T23:42:17.664305493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:17.664593 kubelet[3516]: E0113 23:42:17.664534 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:42:17.664734 kubelet[3516]: E0113 23:42:17.664605 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:42:17.665523 kubelet[3516]: E0113 23:42:17.664753 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7569cdf946-2r7qk_calico-apiserver(bc4e2f94-7e3c-446b-9bce-55e8c6abc38d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:17.665523 kubelet[3516]: E0113 23:42:17.664809 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:42:18.397229 containerd[1894]: time="2026-01-13T23:42:18.395830526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:42:18.572319 systemd[1]: Started sshd@15-172.31.24.127:22-20.161.92.111:52424.service - OpenSSH per-connection server daemon (20.161.92.111:52424). Jan 13 23:42:18.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.24.127:22-20.161.92.111:52424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:18.574471 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:42:18.574566 kernel: audit: type=1130 audit(1768347738.572:826): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.24.127:22-20.161.92.111:52424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:18.639410 containerd[1894]: time="2026-01-13T23:42:18.639200684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:18.641424 containerd[1894]: time="2026-01-13T23:42:18.641365318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:42:18.641871 containerd[1894]: time="2026-01-13T23:42:18.641500121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:18.642311 kubelet[3516]: E0113 23:42:18.642255 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:42:18.643296 kubelet[3516]: E0113 23:42:18.642325 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:42:18.643296 kubelet[3516]: E0113 23:42:18.642482 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:18.644964 containerd[1894]: time="2026-01-13T23:42:18.644538288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:42:18.924516 containerd[1894]: time="2026-01-13T23:42:18.924302558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:18.926733 containerd[1894]: time="2026-01-13T23:42:18.926492020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:42:18.926733 containerd[1894]: time="2026-01-13T23:42:18.926640918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:18.927430 kubelet[3516]: E0113 23:42:18.927270 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:42:18.927430 kubelet[3516]: E0113 23:42:18.927372 3516 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:42:18.928204 kubelet[3516]: E0113 23:42:18.927953 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:18.928204 kubelet[3516]: E0113 23:42:18.928048 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:42:19.069000 audit[5711]: USER_ACCT pid=5711 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.077500 sshd[5711]: Accepted publickey for core from 20.161.92.111 port 52424 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:19.077000 audit[5711]: CRED_ACQ pid=5711 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.084495 kernel: audit: type=1101 audit(1768347739.069:827): pid=5711 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.084649 kernel: audit: type=1103 audit(1768347739.077:828): pid=5711 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.080019 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:19.088565 kernel: audit: type=1006 audit(1768347739.077:829): pid=5711 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 13 23:42:19.077000 audit[5711]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd1ec2b90 a2=3 a3=0 items=0 ppid=1 pid=5711 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:19.095071 kernel: audit: type=1300 audit(1768347739.077:829): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd1ec2b90 a2=3 a3=0 items=0 ppid=1 pid=5711 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:19.077000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:19.098336 kernel: audit: type=1327 audit(1768347739.077:829): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:19.105809 systemd-logind[1857]: New session 17 of user core. Jan 13 23:42:19.114282 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 23:42:19.122000 audit[5711]: USER_START pid=5711 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.130950 kernel: audit: type=1105 audit(1768347739.122:830): pid=5711 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.131000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.138982 kernel: audit: type=1103 audit(1768347739.131:831): pid=5715 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.449819 sshd[5715]: Connection closed by 20.161.92.111 port 52424 Jan 13 23:42:19.451250 sshd-session[5711]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:19.454000 audit[5711]: USER_END pid=5711 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.461084 systemd[1]: sshd@15-172.31.24.127:22-20.161.92.111:52424.service: Deactivated successfully. Jan 13 23:42:19.461990 systemd-logind[1857]: Session 17 logged out. Waiting for processes to exit. 
Jan 13 23:42:19.454000 audit[5711]: CRED_DISP pid=5711 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.465060 kernel: audit: type=1106 audit(1768347739.454:832): pid=5711 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.24.127:22-20.161.92.111:52424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:19.471947 kernel: audit: type=1104 audit(1768347739.454:833): pid=5711 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:19.473387 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 23:42:19.482821 systemd-logind[1857]: Removed session 17. Jan 13 23:42:19.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.24.127:22-20.161.92.111:52438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:19.546619 systemd[1]: Started sshd@16-172.31.24.127:22-20.161.92.111:52438.service - OpenSSH per-connection server daemon (20.161.92.111:52438). Jan 13 23:42:20.014000 audit[5727]: USER_ACCT pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:20.015604 sshd[5727]: Accepted publickey for core from 20.161.92.111 port 52438 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:20.016000 audit[5727]: CRED_ACQ pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:20.016000 audit[5727]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5d15410 a2=3 a3=0 items=0 ppid=1 pid=5727 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:20.016000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:20.020207 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:20.032847 systemd-logind[1857]: New session 18 of user core. Jan 13 23:42:20.041303 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 13 23:42:20.048000 audit[5727]: USER_START pid=5727 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:20.052000 audit[5731]: CRED_ACQ pid=5731 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:21.398478 containerd[1894]: time="2026-01-13T23:42:21.398410763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:42:21.631092 sshd[5731]: Connection closed by 20.161.92.111 port 52438 Jan 13 23:42:21.631556 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:21.635000 audit[5727]: USER_END pid=5727 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:21.635000 audit[5727]: CRED_DISP pid=5727 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:21.642000 systemd[1]: sshd@16-172.31.24.127:22-20.161.92.111:52438.service: Deactivated successfully. Jan 13 23:42:21.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.24.127:22-20.161.92.111:52438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:21.646371 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 23:42:21.649063 systemd-logind[1857]: Session 18 logged out. Waiting for processes to exit. Jan 13 23:42:21.653687 systemd-logind[1857]: Removed session 18. 
Jan 13 23:42:21.684892 containerd[1894]: time="2026-01-13T23:42:21.684653459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:21.687195 containerd[1894]: time="2026-01-13T23:42:21.687026409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:42:21.687195 containerd[1894]: time="2026-01-13T23:42:21.687097208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:21.687470 kubelet[3516]: E0113 23:42:21.687411 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:42:21.688054 kubelet[3516]: E0113 23:42:21.687475 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:42:21.688054 kubelet[3516]: E0113 23:42:21.687585 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7569cdf946-hmfxf_calico-apiserver(0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:21.688054 kubelet[3516]: E0113 23:42:21.687636 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:42:21.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.24.127:22-20.161.92.111:52444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:21.728338 systemd[1]: Started sshd@17-172.31.24.127:22-20.161.92.111:52444.service - OpenSSH per-connection server daemon (20.161.92.111:52444). 
Jan 13 23:42:22.211000 audit[5742]: USER_ACCT pid=5742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:22.212468 sshd[5742]: Accepted publickey for core from 20.161.92.111 port 52444 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:22.214000 audit[5742]: CRED_ACQ pid=5742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:22.214000 audit[5742]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca69bed0 a2=3 a3=0 items=0 ppid=1 pid=5742 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:22.214000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:22.217153 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:22.230542 systemd-logind[1857]: New session 19 of user core. Jan 13 23:42:22.246355 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 23:42:22.254000 audit[5742]: USER_START pid=5742 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:22.258000 audit[5746]: CRED_ACQ pid=5746 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:23.625947 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 13 23:42:23.626142 kernel: audit: type=1325 audit(1768347743.624:850): table=filter:143 family=2 entries=26 op=nft_register_rule pid=5770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:23.624000 audit[5770]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:23.624000 audit[5770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff7bb7090 a2=0 a3=1 items=0 ppid=3663 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:23.638860 kernel: audit: type=1300 audit(1768347743.624:850): arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff7bb7090 a2=0 a3=1 items=0 ppid=3663 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:23.639044 kernel: audit: type=1327 audit(1768347743.624:850): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:23.624000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:23.646000 audit[5770]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:23.646000 audit[5770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff7bb7090 a2=0 a3=1 items=0 ppid=3663 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:23.662796 kernel: audit: type=1325 audit(1768347743.646:851): table=nat:144 family=2 entries=20 op=nft_register_rule pid=5770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:23.662975 kernel: audit: type=1300 audit(1768347743.646:851): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff7bb7090 a2=0 a3=1 items=0 ppid=3663 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:23.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:23.669548 kernel: audit: type=1327 audit(1768347743.646:851): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:23.742630 sshd[5746]: Connection closed by 20.161.92.111 port 52444 Jan 13 23:42:23.743862 sshd-session[5742]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:23.748000 audit[5742]: USER_END pid=5742 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:23.757131 systemd[1]: sshd@17-172.31.24.127:22-20.161.92.111:52444.service: Deactivated successfully. Jan 13 23:42:23.749000 audit[5742]: CRED_DISP pid=5742 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:23.763772 kernel: audit: type=1106 audit(1768347743.748:852): pid=5742 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:23.765385 kernel: audit: type=1104 audit(1768347743.749:853): pid=5742 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:23.765495 kernel: audit: type=1131 audit(1768347743.756:854): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.24.127:22-20.161.92.111:52444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:23.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.24.127:22-20.161.92.111:52444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:23.766009 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 23:42:23.770103 systemd-logind[1857]: Session 19 logged out. Waiting for processes to exit. Jan 13 23:42:23.776413 systemd-logind[1857]: Removed session 19. Jan 13 23:42:23.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.24.127:22-20.161.92.111:36892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:23.841856 systemd[1]: Started sshd@18-172.31.24.127:22-20.161.92.111:36892.service - OpenSSH per-connection server daemon (20.161.92.111:36892). Jan 13 23:42:23.848962 kernel: audit: type=1130 audit(1768347743.841:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.24.127:22-20.161.92.111:36892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:24.329000 audit[5775]: USER_ACCT pid=5775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:24.331132 sshd[5775]: Accepted publickey for core from 20.161.92.111 port 36892 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:24.331000 audit[5775]: CRED_ACQ pid=5775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:24.332000 audit[5775]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeb6cc0a0 a2=3 a3=0 items=0 ppid=1 pid=5775 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:24.332000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:24.334605 sshd-session[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:24.346254 systemd-logind[1857]: New session 20 of user core. Jan 13 23:42:24.365285 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 13 23:42:24.373000 audit[5775]: USER_START pid=5775 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:24.377000 audit[5779]: CRED_ACQ pid=5779 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:24.692000 audit[5787]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=5787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:24.692000 audit[5787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffe1d5310 a2=0 a3=1 items=0 ppid=3663 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:24.692000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:24.702000 audit[5787]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:24.702000 audit[5787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffe1d5310 a2=0 a3=1 items=0 ppid=3663 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:24.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:24.999924 sshd[5779]: Connection closed by 20.161.92.111 port 36892 Jan 13 23:42:25.001279 sshd-session[5775]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:25.004000 audit[5775]: USER_END pid=5775 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:25.004000 audit[5775]: CRED_DISP pid=5775 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:25.012212 systemd[1]: sshd@18-172.31.24.127:22-20.161.92.111:36892.service: Deactivated successfully. Jan 13 23:42:25.012327 systemd-logind[1857]: Session 20 logged out. Waiting for processes to exit. Jan 13 23:42:25.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.24.127:22-20.161.92.111:36892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:25.020831 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 23:42:25.027255 systemd-logind[1857]: Removed session 20. 
Jan 13 23:42:25.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.24.127:22-20.161.92.111:36906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:25.093435 systemd[1]: Started sshd@19-172.31.24.127:22-20.161.92.111:36906.service - OpenSSH per-connection server daemon (20.161.92.111:36906). Jan 13 23:42:25.414117 kubelet[3516]: E0113 23:42:25.413422 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:42:25.589000 audit[5792]: USER_ACCT pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:25.590747 sshd[5792]: Accepted publickey for core from 20.161.92.111 port 36906 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:25.592000 audit[5792]: CRED_ACQ pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:25.592000 audit[5792]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6801df0 a2=3 a3=0 items=0 ppid=1 pid=5792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:25.592000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:25.595369 sshd-session[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:25.609024 systemd-logind[1857]: New session 21 of user core. Jan 13 23:42:25.615352 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 13 23:42:25.623000 audit[5792]: USER_START pid=5792 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:25.627000 audit[5796]: CRED_ACQ pid=5796 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:25.958510 sshd[5796]: Connection closed by 20.161.92.111 port 36906 Jan 13 23:42:25.959360 sshd-session[5792]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:25.960000 audit[5792]: USER_END pid=5792 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:25.961000 audit[5792]: CRED_DISP pid=5792 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:25.968117 systemd[1]: sshd@19-172.31.24.127:22-20.161.92.111:36906.service: Deactivated successfully. Jan 13 23:42:25.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.24.127:22-20.161.92.111:36906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:25.973510 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 23:42:25.975768 systemd-logind[1857]: Session 21 logged out. Waiting for processes to exit. Jan 13 23:42:25.979336 systemd-logind[1857]: Removed session 21. 
Jan 13 23:42:28.396014 kubelet[3516]: E0113 23:42:28.395828 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:42:29.395723 kubelet[3516]: E0113 23:42:29.395280 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:42:30.442000 audit[5808]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:30.445099 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 13 23:42:30.445207 kernel: audit: type=1325 audit(1768347750.442:875): table=filter:147 family=2 entries=26 op=nft_register_rule pid=5808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:30.442000 audit[5808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe8e2c430 a2=0 a3=1 items=0 ppid=3663 pid=5808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:30.454911 kernel: audit: type=1300 audit(1768347750.442:875): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe8e2c430 a2=0 a3=1 items=0 ppid=3663 pid=5808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:30.455059 kernel: audit: type=1327 audit(1768347750.442:875): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:30.442000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:30.458000 audit[5808]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=5808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:30.458000 audit[5808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe8e2c430 a2=0 a3=1 items=0 ppid=3663 pid=5808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:30.465016 kernel: audit: type=1325 audit(1768347750.458:876): table=nat:148 family=2 entries=104 op=nft_register_chain pid=5808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:42:30.474665 kernel: audit: type=1300 audit(1768347750.458:876): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 
a1=ffffe8e2c430 a2=0 a3=1 items=0 ppid=3663 pid=5808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:30.474773 kernel: audit: type=1327 audit(1768347750.458:876): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:30.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:42:31.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.24.127:22-20.161.92.111:36918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:31.061616 systemd[1]: Started sshd@20-172.31.24.127:22-20.161.92.111:36918.service - OpenSSH per-connection server daemon (20.161.92.111:36918). Jan 13 23:42:31.074016 kernel: audit: type=1130 audit(1768347751.060:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.24.127:22-20.161.92.111:36918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:31.397216 kubelet[3516]: E0113 23:42:31.396596 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:42:31.556000 audit[5835]: USER_ACCT pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:31.558611 sshd[5835]: Accepted publickey for core from 20.161.92.111 port 36918 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:31.565003 kernel: audit: type=1101 audit(1768347751.556:878): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:31.564000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:31.572349 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:31.575529 kernel: audit: type=1103 audit(1768347751.564:879): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:31.575646 kernel: audit: type=1006 audit(1768347751.570:880): pid=5835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=22 res=1 Jan 13 23:42:31.570000 audit[5835]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef0a87a0 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:31.570000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:31.586001 systemd-logind[1857]: New session 22 of user core. Jan 13 23:42:31.592228 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 23:42:31.598000 audit[5835]: USER_START pid=5835 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:31.602000 audit[5840]: CRED_ACQ pid=5840 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:31.916599 sshd[5840]: Connection closed by 20.161.92.111 port 36918 Jan 13 23:42:31.917757 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:31.920000 audit[5835]: USER_END pid=5835 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:31.921000 audit[5835]: CRED_DISP pid=5835 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:31.927024 systemd[1]: sshd@20-172.31.24.127:22-20.161.92.111:36918.service: Deactivated successfully. Jan 13 23:42:31.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.24.127:22-20.161.92.111:36918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:31.935766 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 23:42:31.939823 systemd-logind[1857]: Session 22 logged out. Waiting for processes to exit. Jan 13 23:42:31.946813 systemd-logind[1857]: Removed session 22. 
Jan 13 23:42:32.398275 kubelet[3516]: E0113 23:42:32.398140 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:42:36.400918 kubelet[3516]: E0113 23:42:36.400229 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:42:37.016166 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 13 23:42:37.016313 kernel: audit: type=1130 audit(1768347757.013:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.24.127:22-20.161.92.111:43418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:37.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.24.127:22-20.161.92.111:43418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:37.014004 systemd[1]: Started sshd@21-172.31.24.127:22-20.161.92.111:43418.service - OpenSSH per-connection server daemon (20.161.92.111:43418). 
Jan 13 23:42:37.401303 kubelet[3516]: E0113 23:42:37.400775 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:42:37.479000 audit[5855]: USER_ACCT pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.480599 sshd[5855]: Accepted publickey for core from 20.161.92.111 port 43418 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:37.489943 kernel: audit: type=1101 audit(1768347757.479:887): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.488000 audit[5855]: CRED_ACQ pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.499369 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:37.503922 kernel: audit: type=1103 audit(1768347757.488:888): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.504534 kernel: audit: type=1006 audit(1768347757.495:889): pid=5855 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 13 23:42:37.495000 audit[5855]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdae85f60 a2=3 a3=0 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:37.513122 kernel: audit: type=1300 audit(1768347757.495:889): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdae85f60 a2=3 a3=0 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:37.514541 kernel: audit: type=1327 audit(1768347757.495:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:37.495000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:37.527975 systemd-logind[1857]: New session 23 of user core. Jan 13 23:42:37.532259 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 23:42:37.541000 audit[5855]: USER_START pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.553176 kernel: audit: type=1105 audit(1768347757.541:890): pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.554000 audit[5859]: CRED_ACQ pid=5859 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.563943 kernel: audit: type=1103 audit(1768347757.554:891): pid=5859 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.903134 sshd[5859]: Connection closed by 20.161.92.111 port 43418 Jan 13 23:42:37.903538 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:37.907000 audit[5855]: USER_END pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.917823 systemd[1]: sshd@21-172.31.24.127:22-20.161.92.111:43418.service: Deactivated successfully. Jan 13 23:42:37.907000 audit[5855]: CRED_DISP pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.932451 kernel: audit: type=1106 audit(1768347757.907:892): pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.932632 kernel: audit: type=1104 audit(1768347757.907:893): pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:37.931349 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 13 23:42:37.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.24.127:22-20.161.92.111:43418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:37.939736 systemd-logind[1857]: Session 23 logged out. Waiting for processes to exit. Jan 13 23:42:37.942520 systemd-logind[1857]: Removed session 23. Jan 13 23:42:40.395805 kubelet[3516]: E0113 23:42:40.395732 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:42:42.401196 kubelet[3516]: E0113 23:42:42.400352 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:42:43.009376 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:42:43.009522 kernel: audit: type=1130 audit(1768347763.001:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.24.127:22-20.161.92.111:35280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:43.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.24.127:22-20.161.92.111:35280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:42.999728 systemd[1]: Started sshd@22-172.31.24.127:22-20.161.92.111:35280.service - OpenSSH per-connection server daemon (20.161.92.111:35280). 
Jan 13 23:42:43.501000 audit[5874]: USER_ACCT pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.504181 sshd[5874]: Accepted publickey for core from 20.161.92.111 port 35280 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:43.512664 kernel: audit: type=1101 audit(1768347763.501:896): pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.512782 kernel: audit: type=1103 audit(1768347763.510:897): pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.510000 audit[5874]: CRED_ACQ pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.513740 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:43.520657 kernel: audit: type=1006 audit(1768347763.510:898): pid=5874 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 13 23:42:43.510000 audit[5874]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8d54e10 a2=3 a3=0 items=0 ppid=1 pid=5874 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:43.527127 kernel: audit: type=1300 audit(1768347763.510:898): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8d54e10 a2=3 a3=0 items=0 ppid=1 pid=5874 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:43.510000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:43.533532 kernel: audit: type=1327 audit(1768347763.510:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:43.539335 systemd-logind[1857]: New session 24 of user core. Jan 13 23:42:43.548275 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 13 23:42:43.553000 audit[5874]: USER_START pid=5874 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.562000 audit[5878]: CRED_ACQ pid=5878 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.570030 kernel: audit: type=1105 audit(1768347763.553:899): pid=5874 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.570158 kernel: audit: type=1103 audit(1768347763.562:900): pid=5878 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.899065 sshd[5878]: Connection closed by 20.161.92.111 port 35280 Jan 13 23:42:43.900145 sshd-session[5874]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:43.901000 audit[5874]: USER_END pid=5874 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.908756 systemd[1]: sshd@22-172.31.24.127:22-20.161.92.111:35280.service: Deactivated successfully. Jan 13 23:42:43.902000 audit[5874]: CRED_DISP pid=5874 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.918987 kernel: audit: type=1106 audit(1768347763.901:901): pid=5874 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.919124 kernel: audit: type=1104 audit(1768347763.902:902): pid=5874 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:43.914162 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 23:42:43.918447 systemd-logind[1857]: Session 24 logged out. Waiting for processes to exit. Jan 13 23:42:43.922016 systemd-logind[1857]: Removed session 24. Jan 13 23:42:43.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.24.127:22-20.161.92.111:35280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:44.402926 kubelet[3516]: E0113 23:42:44.402139 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:42:47.400294 kubelet[3516]: E0113 23:42:47.400109 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:42:48.399411 kubelet[3516]: E0113 23:42:48.398819 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:42:48.995446 systemd[1]: Started sshd@23-172.31.24.127:22-20.161.92.111:35284.service - OpenSSH per-connection server daemon (20.161.92.111:35284). Jan 13 23:42:49.003527 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:42:49.003632 kernel: audit: type=1130 audit(1768347768.994:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.24.127:22-20.161.92.111:35284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:48.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.24.127:22-20.161.92.111:35284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:49.398821 kubelet[3516]: E0113 23:42:49.398218 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:42:49.479000 audit[5890]: USER_ACCT pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.482833 sshd[5890]: Accepted publickey for core from 20.161.92.111 port 35284 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:49.490000 audit[5890]: CRED_ACQ pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.493031 sshd-session[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:49.498636 kernel: audit: type=1101 audit(1768347769.479:905): pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.498764 kernel: audit: type=1103 audit(1768347769.490:906): pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.504617 kernel: audit: type=1006 audit(1768347769.490:907): pid=5890 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 13 23:42:49.490000 audit[5890]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd76c900 a2=3 a3=0 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:49.513844 kernel: audit: type=1300 audit(1768347769.490:907): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd76c900 a2=3 a3=0 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:49.518298 kernel: audit: type=1327 audit(1768347769.490:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:49.490000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:49.525323 systemd-logind[1857]: New session 25 of user core. Jan 13 23:42:49.532352 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 23:42:49.539000 audit[5890]: USER_START pid=5890 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.549958 kernel: audit: type=1105 audit(1768347769.539:908): pid=5890 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.549000 audit[5894]: CRED_ACQ pid=5894 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.556969 kernel: audit: type=1103 audit(1768347769.549:909): pid=5894 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.871317 sshd[5894]: Connection closed by 20.161.92.111 port 35284 Jan 13 23:42:49.872241 sshd-session[5890]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:49.876000 audit[5890]: USER_END pid=5890 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.892001 systemd-logind[1857]: Session 25 logged out. Waiting for processes to exit. Jan 13 23:42:49.893296 systemd[1]: sshd@23-172.31.24.127:22-20.161.92.111:35284.service: Deactivated successfully. Jan 13 23:42:49.904118 kernel: audit: type=1106 audit(1768347769.876:910): pid=5890 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.904257 kernel: audit: type=1104 audit(1768347769.885:911): pid=5890 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.885000 audit[5890]: CRED_DISP pid=5890 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:49.906056 systemd[1]: session-25.scope: Deactivated successfully. 
Jan 13 23:42:49.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.24.127:22-20.161.92.111:35284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:49.915606 systemd-logind[1857]: Removed session 25. Jan 13 23:42:53.396180 containerd[1894]: time="2026-01-13T23:42:53.395767073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:42:53.830319 containerd[1894]: time="2026-01-13T23:42:53.830031259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:53.833336 containerd[1894]: time="2026-01-13T23:42:53.832278139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:42:53.833336 containerd[1894]: time="2026-01-13T23:42:53.832420939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:53.833593 kubelet[3516]: E0113 23:42:53.833001 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:42:53.833593 kubelet[3516]: E0113 23:42:53.833059 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:42:53.833593 kubelet[3516]: E0113 23:42:53.833159 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6488775c94-xpm9n_calico-system(c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:53.833593 kubelet[3516]: E0113 23:42:53.833213 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:42:54.974058 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:42:54.974152 kernel: audit: type=1130 audit(1768347774.966:913): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.24.127:22-20.161.92.111:44294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:42:54.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.24.127:22-20.161.92.111:44294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:54.967293 systemd[1]: Started sshd@24-172.31.24.127:22-20.161.92.111:44294.service - OpenSSH per-connection server daemon (20.161.92.111:44294). Jan 13 23:42:55.397041 kubelet[3516]: E0113 23:42:55.396956 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:42:55.470000 audit[5912]: USER_ACCT pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.478021 sshd[5912]: Accepted publickey for core from 20.161.92.111 port 44294 ssh2: RSA SHA256:/TyQMOXSzDcQPdZae+c/TZBrs6+oboUBQs8HvJ0n6ys Jan 13 23:42:55.478944 kernel: audit: type=1101 audit(1768347775.470:914): pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.479000 audit[5912]: CRED_ACQ pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.482676 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:42:55.493536 kernel: audit: type=1103 audit(1768347775.479:915): pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.493676 kernel: audit: type=1006 audit(1768347775.479:916): pid=5912 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 13 23:42:55.479000 audit[5912]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc919e970 a2=3 a3=0 items=0 ppid=1 pid=5912 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.503828 kernel: audit: type=1300 audit(1768347775.479:916): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc919e970 a2=3 a3=0 items=0 ppid=1 pid=5912 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:42:55.479000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D 
Jan 13 23:42:55.513191 kernel: audit: type=1327 audit(1768347775.479:916): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:42:55.521768 systemd-logind[1857]: New session 26 of user core. Jan 13 23:42:55.532339 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 23:42:55.540000 audit[5912]: USER_START pid=5912 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.549977 kernel: audit: type=1105 audit(1768347775.540:917): pid=5912 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.553000 audit[5916]: CRED_ACQ pid=5916 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.560961 kernel: audit: type=1103 audit(1768347775.553:918): pid=5916 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.908057 sshd[5916]: Connection closed by 20.161.92.111 port 44294 Jan 13 23:42:55.909220 sshd-session[5912]: pam_unix(sshd:session): session closed for user core Jan 13 23:42:55.913000 audit[5912]: USER_END pid=5912 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.913000 audit[5912]: CRED_DISP pid=5912 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.930149 systemd[1]: sshd@24-172.31.24.127:22-20.161.92.111:44294.service: Deactivated successfully. 
Jan 13 23:42:55.932225 kernel: audit: type=1106 audit(1768347775.913:919): pid=5912 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.932300 kernel: audit: type=1104 audit(1768347775.913:920): pid=5912 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 13 23:42:55.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.24.127:22-20.161.92.111:44294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:42:55.938739 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 23:42:55.945486 systemd-logind[1857]: Session 26 logged out. Waiting for processes to exit. Jan 13 23:42:55.949854 systemd-logind[1857]: Removed session 26. Jan 13 23:42:57.397981 containerd[1894]: time="2026-01-13T23:42:57.397864029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:42:57.683491 containerd[1894]: time="2026-01-13T23:42:57.682495750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:42:57.685202 containerd[1894]: time="2026-01-13T23:42:57.685102798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:42:57.685359 containerd[1894]: time="2026-01-13T23:42:57.685123762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:42:57.685515 kubelet[3516]: E0113 23:42:57.685447 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:42:57.686094 kubelet[3516]: E0113 23:42:57.685521 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:42:57.686094 kubelet[3516]: E0113 23:42:57.685623 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xqbbk_calico-system(497e3e66-1726-45f4-a990-23061cc5868e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:42:57.686094 kubelet[3516]: E0113 23:42:57.685673 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:42:58.397918 kubelet[3516]: E0113 23:42:58.397785 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:43:00.394733 kubelet[3516]: E0113 23:43:00.394616 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:43:04.397621 containerd[1894]: time="2026-01-13T23:43:04.397554280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:43:04.652592 containerd[1894]: time="2026-01-13T23:43:04.652108193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:04.654484 containerd[1894]: time="2026-01-13T23:43:04.654406925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:43:04.655195 containerd[1894]: time="2026-01-13T23:43:04.654529313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:04.655479 kubelet[3516]: E0113 23:43:04.655049 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:43:04.655479 kubelet[3516]: E0113 23:43:04.655110 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:43:04.656765 kubelet[3516]: E0113 23:43:04.655518 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-d477b98f6-wrppl_calico-system(999f0f43-0933-4443-a84f-03be4dcf7cf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:04.657479 containerd[1894]: time="2026-01-13T23:43:04.657322181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:43:04.934478 containerd[1894]: time="2026-01-13T23:43:04.934160646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:04.936734 containerd[1894]: time="2026-01-13T23:43:04.936489150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:43:04.936734 containerd[1894]: time="2026-01-13T23:43:04.936639966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:04.937125 kubelet[3516]: E0113 23:43:04.937039 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:43:04.937263 kubelet[3516]: E0113 23:43:04.937155 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:43:04.937443 kubelet[3516]: E0113 23:43:04.937338 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-d477b98f6-wrppl_calico-system(999f0f43-0933-4443-a84f-03be4dcf7cf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:04.937515 kubelet[3516]: E0113 23:43:04.937436 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:43:06.397811 kubelet[3516]: E0113 23:43:06.397587 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a" Jan 13 23:43:09.695010 systemd[1]: cri-containerd-dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a.scope: Deactivated successfully. Jan 13 23:43:09.695678 systemd[1]: cri-containerd-dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a.scope: Consumed 5.902s CPU time, 59.7M memory peak, 128K read from disk. Jan 13 23:43:09.703673 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:43:09.703807 kernel: audit: type=1334 audit(1768347789.699:922): prog-id=262 op=LOAD Jan 13 23:43:09.699000 audit: BPF prog-id=262 op=LOAD Jan 13 23:43:09.702000 audit: BPF prog-id=96 op=UNLOAD Jan 13 23:43:09.706268 kernel: audit: type=1334 audit(1768347789.702:923): prog-id=96 op=UNLOAD Jan 13 23:43:09.703000 audit: BPF prog-id=114 op=UNLOAD Jan 13 23:43:09.708832 kernel: audit: type=1334 audit(1768347789.703:924): prog-id=114 op=UNLOAD Jan 13 23:43:09.708999 containerd[1894]: time="2026-01-13T23:43:09.708454126Z" level=info msg="received container exit event container_id:\"dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a\" id:\"dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a\" pid:3348 exit_status:1 exited_at:{seconds:1768347789 nanos:706916290}" Jan 13 23:43:09.711244 kernel: audit: type=1334 audit(1768347789.703:925): prog-id=118 op=UNLOAD Jan 13 23:43:09.703000 audit: BPF prog-id=118 op=UNLOAD Jan 13 23:43:09.757417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a-rootfs.mount: Deactivated successfully. Jan 13 23:43:10.397854 containerd[1894]: time="2026-01-13T23:43:10.397490709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:43:10.415847 kubelet[3516]: I0113 23:43:10.415777 3516 scope.go:117] "RemoveContainer" containerID="dcb94e6ab5e4627d550d78bdb9d6bcfd100abd9d6d366a99dd30a6c426ca7b4a" Jan 13 23:43:10.421501 containerd[1894]: time="2026-01-13T23:43:10.420459801Z" level=info msg="CreateContainer within sandbox \"47f2db06e8680888abd15ae880e7305bcc12dcf346791b22493d6def59e646a0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 13 23:43:10.444475 containerd[1894]: time="2026-01-13T23:43:10.443097022Z" level=info msg="Container d2598ba8bc19c3b6ff630fd8504a83399947dadc4d72b0b836d8fd341de6689b: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:43:10.450220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585207106.mount: Deactivated successfully. 
Jan 13 23:43:10.469381 containerd[1894]: time="2026-01-13T23:43:10.469303150Z" level=info msg="CreateContainer within sandbox \"47f2db06e8680888abd15ae880e7305bcc12dcf346791b22493d6def59e646a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d2598ba8bc19c3b6ff630fd8504a83399947dadc4d72b0b836d8fd341de6689b\"" Jan 13 23:43:10.470454 containerd[1894]: time="2026-01-13T23:43:10.470367154Z" level=info msg="StartContainer for \"d2598ba8bc19c3b6ff630fd8504a83399947dadc4d72b0b836d8fd341de6689b\"" Jan 13 23:43:10.472876 containerd[1894]: time="2026-01-13T23:43:10.472764946Z" level=info msg="connecting to shim d2598ba8bc19c3b6ff630fd8504a83399947dadc4d72b0b836d8fd341de6689b" address="unix:///run/containerd/s/01396ed2c0f5fa05a475d30dd5edd9d8ffb06d3298da66d0385e5bd8a33274dd" protocol=ttrpc version=3 Jan 13 23:43:10.520236 systemd[1]: Started cri-containerd-d2598ba8bc19c3b6ff630fd8504a83399947dadc4d72b0b836d8fd341de6689b.scope - libcontainer container d2598ba8bc19c3b6ff630fd8504a83399947dadc4d72b0b836d8fd341de6689b. Jan 13 23:43:10.550000 audit: BPF prog-id=263 op=LOAD Jan 13 23:43:10.552000 audit: BPF prog-id=264 op=LOAD Jan 13 23:43:10.554816 kernel: audit: type=1334 audit(1768347790.550:926): prog-id=263 op=LOAD Jan 13 23:43:10.555098 kernel: audit: type=1334 audit(1768347790.552:927): prog-id=264 op=LOAD Jan 13 23:43:10.552000 audit[5975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3134 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:10.562019 kernel: audit: type=1300 audit(1768347790.552:927): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3134 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:10.562110 kernel: audit: type=1327 audit(1768347790.552:927): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353938626138626331396333623666663633306664383530346138 Jan 13 23:43:10.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353938626138626331396333623666663633306664383530346138 Jan 13 23:43:10.553000 audit: BPF prog-id=264 op=UNLOAD Jan 13 23:43:10.569684 kernel: audit: type=1334 audit(1768347790.553:928): prog-id=264 op=UNLOAD Jan 13 23:43:10.553000 audit[5975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:10.576729 kernel: audit: type=1300 audit(1768347790.553:928): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:10.553000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353938626138626331396333623666663633306664383530346138 Jan 13 23:43:10.554000 audit: BPF prog-id=265 op=LOAD Jan 13 23:43:10.554000 audit[5975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3134 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:10.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353938626138626331396333623666663633306664383530346138 Jan 13 23:43:10.554000 audit: BPF prog-id=266 op=LOAD Jan 13 23:43:10.554000 audit[5975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3134 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:10.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353938626138626331396333623666663633306664383530346138 Jan 13 23:43:10.554000 audit: BPF prog-id=266 op=UNLOAD Jan 13 23:43:10.554000 audit[5975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:10.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353938626138626331396333623666663633306664383530346138 Jan 13 23:43:10.554000 audit: BPF prog-id=265 op=UNLOAD Jan 13 23:43:10.554000 audit[5975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:10.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353938626138626331396333623666663633306664383530346138 Jan 13 23:43:10.554000 audit: BPF prog-id=267 op=LOAD Jan 13 23:43:10.554000 audit[5975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3134 pid=5975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:10.554000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353938626138626331396333623666663633306664383530346138 Jan 13 23:43:10.628127 containerd[1894]: time="2026-01-13T23:43:10.628016867Z" level=info msg="StartContainer for \"d2598ba8bc19c3b6ff630fd8504a83399947dadc4d72b0b836d8fd341de6689b\" returns successfully" Jan 13 23:43:10.682961 containerd[1894]: time="2026-01-13T23:43:10.682532291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:10.685049 containerd[1894]: time="2026-01-13T23:43:10.684941555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:43:10.685442 containerd[1894]: time="2026-01-13T23:43:10.684957479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:10.685510 kubelet[3516]: E0113 23:43:10.685368 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:10.685510 kubelet[3516]: E0113 23:43:10.685423 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:10.686931 kubelet[3516]: E0113 23:43:10.685659 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7569cdf946-2r7qk_calico-apiserver(bc4e2f94-7e3c-446b-9bce-55e8c6abc38d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:10.686931 kubelet[3516]: E0113 23:43:10.685724 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-2r7qk" podUID="bc4e2f94-7e3c-446b-9bce-55e8c6abc38d" Jan 13 23:43:10.687164 containerd[1894]: time="2026-01-13T23:43:10.686624555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:43:10.986684 systemd[1]: cri-containerd-81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40.scope: Deactivated successfully. Jan 13 23:43:10.988395 containerd[1894]: time="2026-01-13T23:43:10.988117344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:10.989102 systemd[1]: cri-containerd-81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40.scope: Consumed 36.373s CPU time, 112.5M memory peak. 
Jan 13 23:43:10.992764 containerd[1894]: time="2026-01-13T23:43:10.992009388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:43:10.992000 audit: BPF prog-id=152 op=UNLOAD Jan 13 23:43:10.992000 audit: BPF prog-id=156 op=UNLOAD Jan 13 23:43:10.995401 kubelet[3516]: E0113 23:43:10.993698 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:43:10.995401 kubelet[3516]: E0113 23:43:10.993784 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:43:10.995401 kubelet[3516]: E0113 23:43:10.994158 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:10.995765 containerd[1894]: time="2026-01-13T23:43:10.993101820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:10.997935 containerd[1894]: time="2026-01-13T23:43:10.996041220Z" level=info msg="received container exit event container_id:\"81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40\" id:\"81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40\" pid:3840 exit_status:1 exited_at:{seconds:1768347790 nanos:995298024}" Jan 13 23:43:10.999853 containerd[1894]: time="2026-01-13T23:43:10.999750120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:43:11.069084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40-rootfs.mount: Deactivated successfully. 
Jan 13 23:43:11.248419 containerd[1894]: time="2026-01-13T23:43:11.247733938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:11.250923 containerd[1894]: time="2026-01-13T23:43:11.250229086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:43:11.250923 containerd[1894]: time="2026-01-13T23:43:11.250376326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:11.251137 kubelet[3516]: E0113 23:43:11.250871 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:43:11.251137 kubelet[3516]: E0113 23:43:11.251013 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:43:11.251384 kubelet[3516]: E0113 23:43:11.251329 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-p84n5_calico-system(e305c05b-4fdf-40a3-854a-8a106f493072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:11.251478 kubelet[3516]: E0113 23:43:11.251415 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p84n5" podUID="e305c05b-4fdf-40a3-854a-8a106f493072" Jan 13 23:43:11.396337 containerd[1894]: time="2026-01-13T23:43:11.396275614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:43:11.396964 kubelet[3516]: E0113 23:43:11.396869 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xqbbk" podUID="497e3e66-1726-45f4-a990-23061cc5868e" Jan 13 23:43:11.434766 kubelet[3516]: 
I0113 23:43:11.434597 3516 scope.go:117] "RemoveContainer" containerID="81ef87d2e5912f8f8555acc3ac6902587a705b8765b88b908d46c811b6104e40" Jan 13 23:43:11.438681 containerd[1894]: time="2026-01-13T23:43:11.438603527Z" level=info msg="CreateContainer within sandbox \"bc7fd7e37b387e2d45d26f51cbb4118434a0c7d021446dd300ca7f6fb4bb93f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 13 23:43:11.461934 containerd[1894]: time="2026-01-13T23:43:11.461282639Z" level=info msg="Container e5668487847cb4a119cc26a2bd5a7dfb377030cbb404dc7a94f6a29d89515074: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:43:11.487220 containerd[1894]: time="2026-01-13T23:43:11.487146167Z" level=info msg="CreateContainer within sandbox \"bc7fd7e37b387e2d45d26f51cbb4118434a0c7d021446dd300ca7f6fb4bb93f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e5668487847cb4a119cc26a2bd5a7dfb377030cbb404dc7a94f6a29d89515074\"" Jan 13 23:43:11.489085 containerd[1894]: time="2026-01-13T23:43:11.488270063Z" level=info msg="StartContainer for \"e5668487847cb4a119cc26a2bd5a7dfb377030cbb404dc7a94f6a29d89515074\"" Jan 13 23:43:11.491580 containerd[1894]: time="2026-01-13T23:43:11.491489123Z" level=info msg="connecting to shim e5668487847cb4a119cc26a2bd5a7dfb377030cbb404dc7a94f6a29d89515074" address="unix:///run/containerd/s/04cae5463c0309a3f4188c39042ee08cf5bf6db9ce7b9f3b51ebb568eca402ea" protocol=ttrpc version=3 Jan 13 23:43:11.554330 systemd[1]: Started cri-containerd-e5668487847cb4a119cc26a2bd5a7dfb377030cbb404dc7a94f6a29d89515074.scope - libcontainer container e5668487847cb4a119cc26a2bd5a7dfb377030cbb404dc7a94f6a29d89515074. Jan 13 23:43:11.597000 audit: BPF prog-id=268 op=LOAD Jan 13 23:43:11.598000 audit: BPF prog-id=269 op=LOAD Jan 13 23:43:11.598000 audit[6017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3621 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:11.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363638343837383437636234613131396363323661326264356137 Jan 13 23:43:11.599000 audit: BPF prog-id=269 op=UNLOAD Jan 13 23:43:11.599000 audit[6017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3621 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:11.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363638343837383437636234613131396363323661326264356137 Jan 13 23:43:11.599000 audit: BPF prog-id=270 op=LOAD Jan 13 23:43:11.599000 audit[6017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3621 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:11.599000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363638343837383437636234613131396363323661326264356137 Jan 13 23:43:11.599000 audit: BPF prog-id=271 op=LOAD Jan 13 23:43:11.599000 audit[6017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3621 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:11.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363638343837383437636234613131396363323661326264356137 Jan 13 23:43:11.601000 audit: BPF prog-id=271 op=UNLOAD Jan 13 23:43:11.601000 audit[6017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3621 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:11.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363638343837383437636234613131396363323661326264356137 Jan 13 23:43:11.601000 audit: BPF prog-id=270 op=UNLOAD Jan 13 23:43:11.601000 audit[6017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3621 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:11.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363638343837383437636234613131396363323661326264356137 Jan 13 23:43:11.601000 audit: BPF prog-id=272 op=LOAD Jan 13 23:43:11.601000 audit[6017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3621 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:11.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363638343837383437636234613131396363323661326264356137 Jan 13 23:43:11.655306 containerd[1894]: time="2026-01-13T23:43:11.655237428Z" level=info msg="StartContainer for \"e5668487847cb4a119cc26a2bd5a7dfb377030cbb404dc7a94f6a29d89515074\" returns successfully" Jan 13 23:43:11.704661 containerd[1894]: time="2026-01-13T23:43:11.704588964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:43:11.707925 containerd[1894]: time="2026-01-13T23:43:11.707043216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:43:11.707925 containerd[1894]: time="2026-01-13T23:43:11.707106168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:43:11.708148 kubelet[3516]: E0113 23:43:11.707558 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:11.708148 kubelet[3516]: E0113 23:43:11.707618 3516 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:43:11.708148 kubelet[3516]: E0113 23:43:11.707733 3516 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7569cdf946-hmfxf_calico-apiserver(0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:43:11.708148 kubelet[3516]: E0113 23:43:11.707782 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7569cdf946-hmfxf" podUID="0af43ea4-f4bb-4d0b-8dc0-4fb300220dfb" Jan 13 23:43:15.005966 kubelet[3516]: E0113 23:43:15.005850 3516 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-24-127)" Jan 13 23:43:16.169450 systemd[1]: cri-containerd-f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec.scope: Deactivated successfully. Jan 13 23:43:16.170112 systemd[1]: cri-containerd-f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec.scope: Consumed 5.910s CPU time, 20.9M memory peak. 
Jan 13 23:43:16.176938 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 13 23:43:16.177063 kernel: audit: type=1334 audit(1768347796.171:944): prog-id=273 op=LOAD Jan 13 23:43:16.171000 audit: BPF prog-id=273 op=LOAD Jan 13 23:43:16.175000 audit: BPF prog-id=94 op=UNLOAD Jan 13 23:43:16.178937 kernel: audit: type=1334 audit(1768347796.175:945): prog-id=94 op=UNLOAD Jan 13 23:43:16.175000 audit: BPF prog-id=109 op=UNLOAD Jan 13 23:43:16.180753 kernel: audit: type=1334 audit(1768347796.175:946): prog-id=109 op=UNLOAD Jan 13 23:43:16.183293 kernel: audit: type=1334 audit(1768347796.175:947): prog-id=113 op=UNLOAD Jan 13 23:43:16.175000 audit: BPF prog-id=113 op=UNLOAD Jan 13 23:43:16.183444 containerd[1894]: time="2026-01-13T23:43:16.180860510Z" level=info msg="received container exit event container_id:\"f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec\" id:\"f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec\" pid:3352 exit_status:1 exited_at:{seconds:1768347796 nanos:179603162}" Jan 13 23:43:16.230610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec-rootfs.mount: Deactivated successfully. Jan 13 23:43:16.461078 kubelet[3516]: I0113 23:43:16.460934 3516 scope.go:117] "RemoveContainer" containerID="f858271379e011f5ab580d948d5761eef57888f29145b12fe8a46c3203a691ec" Jan 13 23:43:16.466848 containerd[1894]: time="2026-01-13T23:43:16.466062064Z" level=info msg="CreateContainer within sandbox \"d3e6749d351c6f6f0c9f1c4f0066058c785a990b3c04530106740e56c6a178be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 13 23:43:16.488422 containerd[1894]: time="2026-01-13T23:43:16.488334952Z" level=info msg="Container 1fcf48ef6d58d60ff364753bec3d285dfd55e6aaf21d3d8c796d5a98b28618c6: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:43:16.508262 containerd[1894]: time="2026-01-13T23:43:16.508177492Z" level=info msg="CreateContainer within sandbox \"d3e6749d351c6f6f0c9f1c4f0066058c785a990b3c04530106740e56c6a178be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1fcf48ef6d58d60ff364753bec3d285dfd55e6aaf21d3d8c796d5a98b28618c6\"" Jan 13 23:43:16.509604 containerd[1894]: time="2026-01-13T23:43:16.509075452Z" level=info msg="StartContainer for \"1fcf48ef6d58d60ff364753bec3d285dfd55e6aaf21d3d8c796d5a98b28618c6\"" Jan 13 23:43:16.511553 containerd[1894]: time="2026-01-13T23:43:16.511490680Z" level=info msg="connecting to shim 1fcf48ef6d58d60ff364753bec3d285dfd55e6aaf21d3d8c796d5a98b28618c6" address="unix:///run/containerd/s/d95bd25bed52b97ad7289c44747af69b7eacf264172bebdbdcc4f085cb66f266" protocol=ttrpc version=3 Jan 13 23:43:16.554277 systemd[1]: Started cri-containerd-1fcf48ef6d58d60ff364753bec3d285dfd55e6aaf21d3d8c796d5a98b28618c6.scope - libcontainer container 1fcf48ef6d58d60ff364753bec3d285dfd55e6aaf21d3d8c796d5a98b28618c6. 
Jan 13 23:43:16.578000 audit: BPF prog-id=274 op=LOAD Jan 13 23:43:16.578000 audit: BPF prog-id=275 op=LOAD Jan 13 23:43:16.582879 kernel: audit: type=1334 audit(1768347796.578:948): prog-id=274 op=LOAD Jan 13 23:43:16.583142 kernel: audit: type=1334 audit(1768347796.578:949): prog-id=275 op=LOAD Jan 13 23:43:16.583201 kernel: audit: type=1300 audit(1768347796.578:949): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3131 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:16.578000 audit[6074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3131 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:16.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166636634386566366435386436306666333634373533626563336432 Jan 13 23:43:16.594955 kernel: audit: type=1327 audit(1768347796.578:949): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166636634386566366435386436306666333634373533626563336432 Jan 13 23:43:16.595071 kernel: audit: type=1334 audit(1768347796.580:950): prog-id=275 op=UNLOAD Jan 13 23:43:16.580000 audit: BPF prog-id=275 op=UNLOAD Jan 13 23:43:16.580000 audit[6074]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:16.602368 kernel: audit: type=1300 audit(1768347796.580:950): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:16.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166636634386566366435386436306666333634373533626563336432 Jan 13 23:43:16.580000 audit: BPF prog-id=276 op=LOAD Jan 13 23:43:16.580000 audit[6074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3131 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:16.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166636634386566366435386436306666333634373533626563336432 Jan 13 23:43:16.582000 audit: BPF prog-id=277 op=LOAD Jan 13 23:43:16.582000 audit[6074]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3131 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:16.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166636634386566366435386436306666333634373533626563336432 Jan 13 23:43:16.588000 audit: BPF prog-id=277 op=UNLOAD Jan 13 23:43:16.588000 audit[6074]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:16.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166636634386566366435386436306666333634373533626563336432 Jan 13 23:43:16.588000 audit: BPF prog-id=276 op=UNLOAD Jan 13 23:43:16.588000 audit[6074]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3131 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:16.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166636634386566366435386436306666333634373533626563336432 Jan 13 23:43:16.588000 audit: BPF prog-id=278 op=LOAD Jan 13 23:43:16.588000 audit[6074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3131 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:43:16.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166636634386566366435386436306666333634373533626563336432 Jan 13 23:43:16.659524 containerd[1894]: time="2026-01-13T23:43:16.659463256Z" level=info msg="StartContainer for \"1fcf48ef6d58d60ff364753bec3d285dfd55e6aaf21d3d8c796d5a98b28618c6\" returns successfully" Jan 13 23:43:17.396073 kubelet[3516]: E0113 23:43:17.395992 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d477b98f6-wrppl" podUID="999f0f43-0933-4443-a84f-03be4dcf7cf6" Jan 13 23:43:18.396077 kubelet[3516]: E0113 23:43:18.396011 3516 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6488775c94-xpm9n" podUID="c516ddef-9b61-4137-ae4e-9c5e1b1d0b5a"