Jan 23 23:32:57.972184 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 23 23:32:57.972229 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Jan 23 21:37:22 -00 2026 Jan 23 23:32:57.972253 kernel: KASLR disabled due to lack of seed Jan 23 23:32:57.972270 kernel: efi: EFI v2.7 by EDK II Jan 23 23:32:57.972286 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598 Jan 23 23:32:57.972302 kernel: secureboot: Secure boot disabled Jan 23 23:32:57.972320 kernel: ACPI: Early table checksum verification disabled Jan 23 23:32:57.972336 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 23 23:32:57.972353 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 23 23:32:57.972373 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 23 23:32:57.972389 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 23 23:32:57.972406 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 23 23:32:57.972422 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 23 23:32:57.972438 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 23 23:32:57.972461 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 23 23:32:57.972479 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 23 23:32:57.972496 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 23 23:32:57.972514 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 23 23:32:57.972531 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 23 23:32:57.972548 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 23 23:32:57.972565 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 23 23:32:57.972582 kernel: printk: legacy bootconsole [uart0] enabled Jan 23 23:32:57.972599 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 23 23:32:57.972617 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 23:32:57.972638 kernel: NODE_DATA(0) allocated [mem 0x4b584ea00-0x4b5855fff] Jan 23 23:32:57.972655 kernel: Zone ranges: Jan 23 23:32:57.972672 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 23 23:32:57.972689 kernel: DMA32 empty Jan 23 23:32:57.972706 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 23 23:32:57.977165 kernel: Device empty Jan 23 23:32:57.977191 kernel: Movable zone start for each node Jan 23 23:32:57.977209 kernel: Early memory node ranges Jan 23 23:32:57.977226 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 23 23:32:57.977243 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 23 23:32:57.977261 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 23 23:32:57.977277 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 23 23:32:57.977304 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 23 23:32:57.977321 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 23 23:32:57.977338 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 23 23:32:57.977355 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 23 23:32:57.977380 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 23:32:57.977402 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 23 23:32:57.977420 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 23 23:32:57.977438 kernel: psci: probing for conduit method from ACPI. Jan 23 23:32:57.977456 kernel: psci: PSCIv1.0 detected in firmware. Jan 23 23:32:57.977474 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 23:32:57.977492 kernel: psci: Trusted OS migration not required Jan 23 23:32:57.977509 kernel: psci: SMC Calling Convention v1.1 Jan 23 23:32:57.977527 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 23 23:32:57.977545 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 23 23:32:57.977567 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 23 23:32:57.977585 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 23:32:57.977603 kernel: Detected PIPT I-cache on CPU0 Jan 23 23:32:57.977621 kernel: CPU features: detected: GIC system register CPU interface Jan 23 23:32:57.977638 kernel: CPU features: detected: Spectre-v2 Jan 23 23:32:57.977656 kernel: CPU features: detected: Spectre-v3a Jan 23 23:32:57.977674 kernel: CPU features: detected: Spectre-BHB Jan 23 23:32:57.977692 kernel: CPU features: detected: ARM erratum 1742098 Jan 23 23:32:57.977757 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 23 23:32:57.977803 kernel: alternatives: applying boot alternatives Jan 23 23:32:57.977824 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b7aa2947ffddc152dd47eebbcf7a95dcd57c97b69958c2bfdf6c1781ecaf3c1 Jan 23 23:32:57.977850 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 23:32:57.977868 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 23:32:57.977887 kernel: Fallback order for Node 0: 0 Jan 23 23:32:57.977904 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 23 23:32:57.977922 kernel: Policy zone: Normal Jan 23 23:32:57.977940 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:32:57.977958 kernel: software IO TLB: area num 2. Jan 23 23:32:57.977976 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Jan 23 23:32:57.977994 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:32:57.978011 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:32:57.978034 kernel: rcu: RCU event tracing is enabled. Jan 23 23:32:57.978053 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:32:57.978071 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:32:57.978089 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:32:57.978107 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 23:32:57.978125 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:32:57.978143 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 23 23:32:57.978161 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:32:57.978179 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:32:57.978197 kernel: GICv3: 96 SPIs implemented Jan 23 23:32:57.978214 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:32:57.978236 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:32:57.978253 kernel: GICv3: GICv3 features: 16 PPIs Jan 23 23:32:57.978271 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 23 23:32:57.978289 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 23 23:32:57.978307 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 23 23:32:57.978325 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 23 23:32:57.978344 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 23 23:32:57.978364 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 23 23:32:57.978383 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 23 23:32:57.978402 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 23 23:32:57.978422 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:32:57.978445 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 23 23:32:57.978464 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 23 23:32:57.978483 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 23 23:32:57.978501 kernel: Console: colour dummy device 80x25 Jan 23 23:32:57.978521 kernel: printk: legacy console [tty1] enabled Jan 23 23:32:57.978540 kernel: ACPI: Core revision 20240827 Jan 23 23:32:57.978560 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 23 23:32:57.978579 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:32:57.978602 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 23:32:57.978621 kernel: landlock: Up and running. Jan 23 23:32:57.978640 kernel: SELinux: Initializing. Jan 23 23:32:57.978660 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:32:57.978679 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:32:57.978698 kernel: rcu: Hierarchical SRCU implementation. Jan 23 23:32:57.978742 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:32:57.978764 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 23:32:57.978791 kernel: Remapping and enabling EFI services. Jan 23 23:32:57.978810 kernel: smp: Bringing up secondary CPUs ... Jan 23 23:32:57.978829 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:32:57.978848 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 23 23:32:57.978868 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 23 23:32:57.978887 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 23 23:32:57.978906 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:32:57.978929 kernel: SMP: Total of 2 processors activated. 
Jan 23 23:32:57.978948 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:32:57.978976 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:32:57.978999 kernel: CPU features: detected: 32-bit EL1 Support Jan 23 23:32:57.979019 kernel: CPU features: detected: CRC32 instructions Jan 23 23:32:57.979038 kernel: alternatives: applying system-wide alternatives Jan 23 23:32:57.979058 kernel: Memory: 3823340K/4030464K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 185776K reserved, 16384K cma-reserved) Jan 23 23:32:57.979078 kernel: devtmpfs: initialized Jan 23 23:32:57.979101 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:32:57.979121 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:32:57.979140 kernel: 23632 pages in range for non-PLT usage Jan 23 23:32:57.979159 kernel: 515152 pages in range for PLT usage Jan 23 23:32:57.979178 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:32:57.979201 kernel: SMBIOS 3.0.0 present. Jan 23 23:32:57.979220 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 23 23:32:57.979240 kernel: DMI: Memory slots populated: 0/0 Jan 23 23:32:57.979259 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:32:57.979278 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:32:57.979298 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:32:57.979317 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:32:57.979340 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:32:57.979360 kernel: audit: type=2000 audit(0.224:1): state=initialized audit_enabled=0 res=1 Jan 23 23:32:57.979379 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:32:57.979398 kernel: cpuidle: using governor menu Jan 23 23:32:57.979418 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 23:32:57.979437 kernel: ASID allocator initialised with 65536 entries Jan 23 23:32:57.979456 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:32:57.979479 kernel: Serial: AMBA PL011 UART driver Jan 23 23:32:57.979498 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:32:57.979518 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:32:57.979537 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:32:57.979557 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:32:57.979576 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:32:57.979596 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:32:57.979619 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:32:57.979638 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:32:57.979658 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:32:57.979677 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:32:57.979696 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:32:57.986443 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:32:57.986758 kernel: ACPI: Interpreter enabled Jan 23 23:32:57.986794 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:32:57.986815 kernel: ACPI: MCFG table detected, 1 entries Jan 23 23:32:57.986834 kernel: ACPI: CPU0 has been hot-added Jan 23 23:32:57.986854 kernel: ACPI: CPU1 has been hot-added Jan 23 23:32:57.986873 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 23 23:32:57.987241 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 23:32:57.987515 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 23 23:32:57.987975 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 23 23:32:57.988280 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 23 23:32:57.992143 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 23 23:32:57.992187 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 23 23:32:57.992208 kernel: acpiphp: Slot [1] registered Jan 23 23:32:57.992229 kernel: acpiphp: Slot [2] registered Jan 23 23:32:57.992259 kernel: acpiphp: Slot [3] registered Jan 23 23:32:57.992279 kernel: acpiphp: Slot [4] registered Jan 23 23:32:57.992300 kernel: acpiphp: Slot [5] registered Jan 23 23:32:57.992320 kernel: acpiphp: Slot [6] registered Jan 23 23:32:57.992339 kernel: acpiphp: Slot [7] registered Jan 23 23:32:57.992358 kernel: acpiphp: Slot [8] registered Jan 23 23:32:57.992378 kernel: acpiphp: Slot [9] registered Jan 23 23:32:57.992397 kernel: acpiphp: Slot [10] registered Jan 23 23:32:57.992421 kernel: acpiphp: Slot [11] registered Jan 23 23:32:57.992440 kernel: acpiphp: Slot [12] registered Jan 23 23:32:57.992460 kernel: acpiphp: Slot [13] registered Jan 23 23:32:57.992479 kernel: acpiphp: Slot [14] registered Jan 23 23:32:57.992499 kernel: acpiphp: Slot [15] registered Jan 23 23:32:57.992518 kernel: acpiphp: Slot [16] registered Jan 23 23:32:57.992538 kernel: acpiphp: Slot [17] registered Jan 23 23:32:57.992561 kernel: acpiphp: Slot [18] registered Jan 23 23:32:57.992581 kernel: acpiphp: Slot [19] registered Jan 23 23:32:57.992600 kernel: acpiphp: Slot [20] registered Jan 23 23:32:57.992619 kernel: acpiphp: Slot [21] registered Jan 23 23:32:57.992639 
kernel: acpiphp: Slot [22] registered Jan 23 23:32:57.992658 kernel: acpiphp: Slot [23] registered Jan 23 23:32:57.992677 kernel: acpiphp: Slot [24] registered Jan 23 23:32:57.992701 kernel: acpiphp: Slot [25] registered Jan 23 23:32:57.992747 kernel: acpiphp: Slot [26] registered Jan 23 23:32:57.992769 kernel: acpiphp: Slot [27] registered Jan 23 23:32:57.992789 kernel: acpiphp: Slot [28] registered Jan 23 23:32:57.992809 kernel: acpiphp: Slot [29] registered Jan 23 23:32:57.992829 kernel: acpiphp: Slot [30] registered Jan 23 23:32:57.992848 kernel: acpiphp: Slot [31] registered Jan 23 23:32:57.992868 kernel: PCI host bridge to bus 0000:00 Jan 23 23:32:57.993186 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 23 23:32:57.993450 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 23 23:32:57.998877 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 23 23:32:57.999187 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 23 23:32:57.999503 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 23 23:32:58.000017 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 23 23:32:58.000333 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 23 23:32:58.000631 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 23 23:32:58.001119 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 23 23:32:58.001409 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 23:32:58.001765 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 23 23:32:58.002054 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 23 23:32:58.002324 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 23 23:32:58.002599 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 23 23:32:58.002907 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 23:32:58.003156 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 23 23:32:58.003405 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 23:32:58.003641 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 23 23:32:58.003667 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 23:32:58.003688 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 23:32:58.003744 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 23:32:58.003770 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 23:32:58.003790 kernel: iommu: Default domain type: Translated Jan 23 23:32:58.003816 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:32:58.003836 kernel: efivars: Registered efivars operations Jan 23 23:32:58.003856 kernel: vgaarb: loaded Jan 23 23:32:58.003875 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:32:58.003895 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:32:58.003914 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:32:58.003934 kernel: pnp: PnP ACPI init Jan 23 23:32:58.004269 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 23 23:32:58.004301 kernel: pnp: PnP ACPI: found 1 devices Jan 23 23:32:58.004321 kernel: NET: Registered PF_INET protocol family Jan 23 23:32:58.004341 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:32:58.004361 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:32:58.004380 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:32:58.004400 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 23:32:58.004426 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:32:58.004446 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:32:58.004466 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:32:58.004486 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:32:58.004505 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 23:32:58.004524 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:32:58.004543 kernel: kvm [1]: HYP mode not available Jan 23 23:32:58.004568 kernel: Initialise system trusted keyrings Jan 23 23:32:58.004587 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:32:58.004606 kernel: Key type asymmetric registered Jan 23 23:32:58.004625 kernel: Asymmetric key parser 'x509' registered Jan 23 23:32:58.004645 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 23:32:58.004665 kernel: io scheduler mq-deadline registered Jan 23 23:32:58.004684 kernel: io scheduler kyber registered Jan 23 23:32:58.004772 kernel: io scheduler bfq registered Jan 23 23:32:58.005069 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 23 23:32:58.005097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 23:32:58.005117 kernel: ACPI: button: Power Button [PWRB] Jan 23 23:32:58.005137 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 23 23:32:58.005156 kernel: ACPI: button: Sleep Button [SLPB] Jan 23 23:32:58.005182 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:32:58.005203 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 23:32:58.005465 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 23 23:32:58.005492 kernel: printk: legacy console [ttyS0] disabled Jan 23 23:32:58.005512 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 23 23:32:58.005532 kernel: printk: legacy console [ttyS0] enabled Jan 23 23:32:58.005551 kernel: printk: legacy bootconsole [uart0] disabled Jan 23 23:32:58.005575 kernel: thunder_xcv, ver 1.0 Jan 23 23:32:58.005594 kernel: thunder_bgx, ver 1.0 Jan 23 23:32:58.005613 kernel: nicpf, ver 1.0 Jan 23 23:32:58.005633 kernel: nicvf, ver 1.0 Jan 23 23:32:58.005962 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:32:58.006212 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:32:54 UTC (1769211174) Jan 23 23:32:58.006238 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:32:58.006264 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 23 23:32:58.006283 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:32:58.006303 kernel: watchdog: NMI not fully supported Jan 23 23:32:58.006322 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:32:58.006341 kernel: Segment Routing with IPv6 Jan 23 23:32:58.006360 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:32:58.006379 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:32:58.006402 kernel: Key type 
dns_resolver registered Jan 23 23:32:58.006421 kernel: registered taskstats version 1 Jan 23 23:32:58.006441 kernel: Loading compiled-in X.509 certificates Jan 23 23:32:58.006460 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: efe9666f272e2216f3315e00dc27df84b73ce009' Jan 23 23:32:58.006479 kernel: Demotion targets for Node 0: null Jan 23 23:32:58.006498 kernel: Key type .fscrypt registered Jan 23 23:32:58.006517 kernel: Key type fscrypt-provisioning registered Jan 23 23:32:58.006540 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 23:32:58.006560 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:32:58.006580 kernel: ima: No architecture policies found Jan 23 23:32:58.006599 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:32:58.006618 kernel: clk: Disabling unused clocks Jan 23 23:32:58.006637 kernel: PM: genpd: Disabling unused power domains Jan 23 23:32:58.006656 kernel: Freeing unused kernel memory: 12480K Jan 23 23:32:58.006675 kernel: Run /init as init process Jan 23 23:32:58.006698 kernel: with arguments: Jan 23 23:32:58.006790 kernel: /init Jan 23 23:32:58.006817 kernel: with environment: Jan 23 23:32:58.006837 kernel: HOME=/ Jan 23 23:32:58.006857 kernel: TERM=linux Jan 23 23:32:58.006878 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 23:32:58.007101 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 23 23:32:58.007307 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 23:32:58.007336 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 23:32:58.007356 kernel: GPT:25804799 != 33554431 Jan 23 23:32:58.007377 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 23:32:58.007396 kernel: GPT:25804799 != 33554431 Jan 23 23:32:58.007415 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 23:32:58.007441 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:32:58.007460 kernel: SCSI subsystem initialized Jan 23 23:32:58.007480 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 23:32:58.007499 kernel: device-mapper: uevent: version 1.0.3 Jan 23 23:32:58.007519 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 23:32:58.007539 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 23:32:58.007558 kernel: raid6: neonx8 gen() 6600 MB/s Jan 23 23:32:58.007581 kernel: raid6: neonx4 gen() 6489 MB/s Jan 23 23:32:58.007601 kernel: raid6: neonx2 gen() 5422 MB/s Jan 23 23:32:58.007620 kernel: raid6: neonx1 gen() 3953 MB/s Jan 23 23:32:58.007640 kernel: raid6: int64x8 gen() 3653 MB/s Jan 23 23:32:58.007659 kernel: raid6: int64x4 gen() 3706 MB/s Jan 23 23:32:58.007678 kernel: raid6: int64x2 gen() 3603 MB/s Jan 23 23:32:58.007698 kernel: raid6: int64x1 gen() 2742 MB/s Jan 23 23:32:58.007755 kernel: raid6: using algorithm neonx8 gen() 6600 MB/s Jan 23 23:32:58.007778 kernel: raid6: .... 
xor() 4634 MB/s, rmw enabled Jan 23 23:32:58.007797 kernel: raid6: using neon recovery algorithm Jan 23 23:32:58.007817 kernel: xor: measuring software checksum speed Jan 23 23:32:58.007837 kernel: 8regs : 12992 MB/sec Jan 23 23:32:58.007856 kernel: 32regs : 12137 MB/sec Jan 23 23:32:58.007876 kernel: arm64_neon : 9180 MB/sec Jan 23 23:32:58.007901 kernel: xor: using function: 8regs (12992 MB/sec) Jan 23 23:32:58.007921 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 23:32:58.007940 kernel: BTRFS: device fsid 21279126-4100-4897-a95e-923d96100946 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (220) Jan 23 23:32:58.007960 kernel: BTRFS info (device dm-0): first mount of filesystem 21279126-4100-4897-a95e-923d96100946 Jan 23 23:32:58.007980 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:32:58.007999 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 23:32:58.008019 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:32:58.008042 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 23:32:58.008062 kernel: loop: module loaded Jan 23 23:32:58.008098 kernel: loop0: detected capacity change from 0 to 91832 Jan 23 23:32:58.008123 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:32:58.008145 systemd[1]: Successfully made /usr/ read-only. Jan 23 23:32:58.008170 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 23:32:58.008199 systemd[1]: Detected virtualization amazon. Jan 23 23:32:58.008219 systemd[1]: Detected architecture arm64. Jan 23 23:32:58.008239 systemd[1]: Running in initrd. Jan 23 23:32:58.008260 systemd[1]: No hostname configured, using default hostname. Jan 23 23:32:58.008281 systemd[1]: Hostname set to . Jan 23 23:32:58.008302 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 23:32:58.008322 systemd[1]: Queued start job for default target initrd.target. Jan 23 23:32:58.008347 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:32:58.008368 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:32:58.008388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:32:58.008410 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:32:58.008432 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:32:58.008472 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:32:58.008494 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:32:58.008516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:32:58.008537 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:32:58.008559 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 23:32:58.008583 systemd[1]: Reached target paths.target - Path Units. 
Jan 23 23:32:58.008605 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:32:58.008626 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:32:58.008648 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:32:58.008670 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:32:58.008691 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:32:58.008746 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 23:32:58.008778 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:32:58.008801 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 23:32:58.008823 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:32:58.008845 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:32:58.008866 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:32:58.008888 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:32:58.008909 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:32:58.008935 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 23:32:58.008957 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:32:58.008978 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:32:58.009001 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 23:32:58.009023 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:32:58.009044 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:32:58.009066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:32:58.009093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:32:58.009116 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:32:58.009142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:32:58.009164 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:32:58.009186 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:32:58.009260 systemd-journald[359]: Collecting audit messages is enabled. Jan 23 23:32:58.009310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:32:58.009333 kernel: audit: type=1130 audit(1769211177.978:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.009355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:32:58.009377 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 23:32:58.009401 systemd-journald[359]: Journal started Jan 23 23:32:58.009438 systemd-journald[359]: Runtime Journal (/run/log/journal/ec28b4043b39313ac4215c5536df3e77) is 8M, max 75.3M, 67.3M free. 
Jan 23 23:32:57.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.014360 kernel: Bridge firewalling registered Jan 23 23:32:58.014425 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:32:58.013298 systemd-modules-load[360]: Inserted module 'br_netfilter' Jan 23 23:32:58.020898 kernel: audit: type=1130 audit(1769211178.016:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.018491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:32:58.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.040343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:32:58.045883 kernel: audit: type=1130 audit(1769211178.027:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.053517 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:32:58.056612 kernel: audit: type=1130 audit(1769211178.047:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.064747 kernel: audit: type=1130 audit(1769211178.057:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.064897 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:32:58.073029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:32:58.083009 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:32:58.146657 systemd-tmpfiles[382]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 23:32:58.161886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:32:58.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:32:58.176797 kernel: audit: type=1130 audit(1769211178.160:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.178876 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:32:58.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.186000 audit: BPF prog-id=6 op=LOAD Jan 23 23:32:58.189367 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:32:58.190669 kernel: audit: type=1130 audit(1769211178.177:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.190734 kernel: audit: type=1334 audit(1769211178.186:9): prog-id=6 op=LOAD Jan 23 23:32:58.201206 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:32:58.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.213814 kernel: audit: type=1130 audit(1769211178.207:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.215871 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 23:32:58.256291 dracut-cmdline[399]: dracut-109 Jan 23 23:32:58.265319 dracut-cmdline[399]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b7aa2947ffddc152dd47eebbcf7a95dcd57c97b69958c2bfdf6c1781ecaf3c1 Jan 23 23:32:58.351380 systemd-resolved[395]: Positive Trust Anchors: Jan 23 23:32:58.351416 systemd-resolved[395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:32:58.351425 systemd-resolved[395]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 23:32:58.351483 systemd-resolved[395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:32:58.529779 kernel: Loading iSCSI transport class v2.0-870. Jan 23 23:32:58.578966 kernel: iscsi: registered transport (tcp) Jan 23 23:32:58.624803 kernel: random: crng init done Jan 23 23:32:58.629062 systemd-resolved[395]: Defaulting to hostname 'linux'. 
Jan 23 23:32:58.634374 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:32:58.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.639405 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:32:58.652914 kernel: audit: type=1130 audit(1769211178.637:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.658176 kernel: iscsi: registered transport (qla4xxx) Jan 23 23:32:58.658262 kernel: QLogic iSCSI HBA Driver Jan 23 23:32:58.698800 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:32:58.739084 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:32:58.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.747598 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:32:58.824659 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 23:32:58.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.832690 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 23:32:58.838823 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:32:58.898834 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:32:58.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:58.904000 audit: BPF prog-id=7 op=LOAD Jan 23 23:32:58.904000 audit: BPF prog-id=8 op=LOAD Jan 23 23:32:58.908117 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:32:58.974414 systemd-udevd[629]: Using default interface naming scheme 'v257'. Jan 23 23:32:58.996000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:32:58.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:59.011975 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 23:32:59.063265 dracut-pre-trigger[705]: rd.md=0: removing MD RAID activation Jan 23 23:32:59.068251 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:32:59.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:59.074000 audit: BPF prog-id=9 op=LOAD Jan 23 23:32:59.077688 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 23 23:32:59.135216 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:32:59.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:59.142074 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:32:59.187773 systemd-networkd[744]: lo: Link UP Jan 23 23:32:59.188471 systemd-networkd[744]: lo: Gained carrier Jan 23 23:32:59.192355 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:32:59.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:59.198463 systemd[1]: Reached target network.target - Network. Jan 23 23:32:59.306558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:32:59.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:59.320090 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 23:32:59.521102 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:32:59.523575 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:32:59.538606 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 23:32:59.538660 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 23 23:32:59.539087 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 23 23:32:59.539402 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 23 23:32:59.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:59.529628 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:32:59.539117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:32:59.556746 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:56:ff:48:21:7f Jan 23 23:32:59.557164 (udev-worker)[786]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:32:59.574283 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 23:32:59.574304 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:32:59.588145 systemd-networkd[744]: eth0: Link UP Jan 23 23:32:59.588453 systemd-networkd[744]: eth0: Gained carrier Jan 23 23:32:59.588475 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 23:32:59.604849 systemd-networkd[744]: eth0: DHCPv4 address 172.31.23.100/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:32:59.618757 kernel: nvme nvme0: using unchecked data buffer Jan 23 23:32:59.622528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 23:32:59.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:59.785268 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 23:32:59.850150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:32:59.902491 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 23:32:59.905980 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 23:32:59.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:32:59.937526 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 23:32:59.963582 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:32:59.964724 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:32:59.965514 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:32:59.981764 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 23:32:59.992583 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 23:33:00.020502 disk-uuid[903]: Primary Header is updated. Jan 23 23:33:00.020502 disk-uuid[903]: Secondary Entries is updated. Jan 23 23:33:00.020502 disk-uuid[903]: Secondary Header is updated. Jan 23 23:33:00.113494 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:33:00.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:00.758905 systemd-networkd[744]: eth0: Gained IPv6LL Jan 23 23:33:01.155051 disk-uuid[904]: Warning: The kernel is still using the old partition table. Jan 23 23:33:01.155051 disk-uuid[904]: The new table will be used at the next reboot or after you Jan 23 23:33:01.155051 disk-uuid[904]: run partprobe(8) or kpartx(8) Jan 23 23:33:01.155051 disk-uuid[904]: The operation has completed successfully. Jan 23 23:33:01.177399 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:33:01.178071 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:33:01.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:01.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:01.189969 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 23 23:33:01.266748 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1087) Jan 23 23:33:01.271494 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2702f6ba-cc76-44c2-967b-e3e9acbe619a Jan 23 23:33:01.271572 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:33:01.280652 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:33:01.280743 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 23:33:01.290750 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2702f6ba-cc76-44c2-967b-e3e9acbe619a Jan 23 23:33:01.293813 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:33:01.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:01.300978 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:33:02.464676 ignition[1106]: Ignition 2.24.0 Jan 23 23:33:02.464731 ignition[1106]: Stage: fetch-offline Jan 23 23:33:02.465153 ignition[1106]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:33:02.465185 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:33:02.470542 ignition[1106]: Ignition finished successfully Jan 23 23:33:02.477558 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:33:02.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:02.485018 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 23:33:02.526802 ignition[1115]: Ignition 2.24.0 Jan 23 23:33:02.527328 ignition[1115]: Stage: fetch Jan 23 23:33:02.528133 ignition[1115]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:33:02.528157 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:33:02.528295 ignition[1115]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:33:02.547537 ignition[1115]: PUT result: OK Jan 23 23:33:02.551972 ignition[1115]: parsed url from cmdline: "" Jan 23 23:33:02.552155 ignition[1115]: no config URL provided Jan 23 23:33:02.552186 ignition[1115]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:33:02.552224 ignition[1115]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:33:02.552263 ignition[1115]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:33:02.556832 ignition[1115]: PUT result: OK Jan 23 23:33:02.556958 ignition[1115]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 23:33:02.566205 ignition[1115]: GET result: OK Jan 23 23:33:02.566454 ignition[1115]: parsing config with SHA512: 8f601c8505d6f514b0a5a07e208f4a349831a10f810a775b9fe147154481cea033b3c5dd912ff317ed2aaa8ce80dd689877afeabc7774be5667302bf2cf75227 Jan 23 23:33:02.581271 unknown[1115]: fetched base config from "system" Jan 23 23:33:02.582407 unknown[1115]: fetched base config from "system" Jan 23 23:33:02.583198 unknown[1115]: fetched user config from "aws" Jan 23 23:33:02.584340 ignition[1115]: fetch: fetch complete Jan 23 23:33:02.584356 ignition[1115]: fetch: fetch passed Jan 23 23:33:02.584470 ignition[1115]: Ignition finished successfully Jan 23 23:33:02.597838 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:33:02.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:02.604637 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 23:33:02.650669 ignition[1121]: Ignition 2.24.0 Jan 23 23:33:02.650701 ignition[1121]: Stage: kargs Jan 23 23:33:02.651095 ignition[1121]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:33:02.651118 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:33:02.652363 ignition[1121]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:33:02.654676 ignition[1121]: PUT result: OK Jan 23 23:33:02.667106 ignition[1121]: kargs: kargs passed Jan 23 23:33:02.667301 ignition[1121]: Ignition finished successfully Jan 23 23:33:02.672824 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:33:02.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:02.679582 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 23 23:33:02.735825 ignition[1127]: Ignition 2.24.0 Jan 23 23:33:02.735853 ignition[1127]: Stage: disks Jan 23 23:33:02.736249 ignition[1127]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:33:02.736271 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:33:02.736487 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:33:02.739335 ignition[1127]: PUT result: OK Jan 23 23:33:02.754815 ignition[1127]: disks: disks passed Jan 23 23:33:02.754951 ignition[1127]: Ignition finished successfully Jan 23 23:33:02.759631 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 23:33:02.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:02.761573 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 23:33:02.770186 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:33:02.775857 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:33:02.780573 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:33:02.783046 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:33:02.792436 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 23:33:02.914749 systemd-fsck[1135]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 23 23:33:02.920004 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 23:33:02.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:02.927240 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 23:33:03.205741 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7d385daa-3990-4052-81b1-28f91f90f881 r/w with ordered data mode. Quota mode: none. Jan 23 23:33:03.208178 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 23:33:03.211426 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 23:33:03.281763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:33:03.286191 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 23:33:03.290920 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 23:33:03.291198 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 23:33:03.291254 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:33:03.321348 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 23:33:03.329997 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 23:33:03.357937 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1154) Jan 23 23:33:03.362577 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2702f6ba-cc76-44c2-967b-e3e9acbe619a Jan 23 23:33:03.362660 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:33:03.372576 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:33:03.372661 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 23:33:03.375929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 23:33:05.644321 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 23:33:05.656510 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 23:33:05.656598 kernel: audit: type=1130 audit(1769211185.649:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:05.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:05.659240 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 23:33:05.664096 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 23:33:05.704583 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 23:33:05.710776 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2702f6ba-cc76-44c2-967b-e3e9acbe619a Jan 23 23:33:05.755796 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 23:33:05.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:05.766784 kernel: audit: type=1130 audit(1769211185.756:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:05.767750 ignition[1251]: INFO : Ignition 2.24.0 Jan 23 23:33:05.767750 ignition[1251]: INFO : Stage: mount Jan 23 23:33:05.772940 ignition[1251]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:33:05.772940 ignition[1251]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:33:05.772940 ignition[1251]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:33:05.772940 ignition[1251]: INFO : PUT result: OK Jan 23 23:33:05.791315 ignition[1251]: INFO : mount: mount passed Jan 23 23:33:05.793354 ignition[1251]: INFO : Ignition finished successfully Jan 23 23:33:05.798513 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 23:33:05.805270 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 23:33:05.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:05.814792 kernel: audit: type=1130 audit(1769211185.799:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:05.840437 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 23 23:33:05.889770 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1262) Jan 23 23:33:05.895213 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2702f6ba-cc76-44c2-967b-e3e9acbe619a Jan 23 23:33:05.895300 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:33:05.904580 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:33:05.904671 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 23:33:05.909357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 23:33:05.976421 ignition[1279]: INFO : Ignition 2.24.0 Jan 23 23:33:05.978884 ignition[1279]: INFO : Stage: files Jan 23 23:33:05.978884 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:33:05.978884 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:33:05.978884 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:33:05.990480 ignition[1279]: INFO : PUT result: OK Jan 23 23:33:06.001985 ignition[1279]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:33:06.008780 ignition[1279]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:33:06.008780 ignition[1279]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:33:06.049627 ignition[1279]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:33:06.053303 ignition[1279]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:33:06.057033 unknown[1279]: wrote ssh authorized keys file for user: core Jan 23 23:33:06.059571 ignition[1279]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:33:06.095903 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 23:33:06.100891 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 23:33:06.175972 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 23:33:06.339190 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 23:33:06.343773 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 23:33:06.343773 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:33:06.351658 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:33:06.351658 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:33:06.351658 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:33:06.363469 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:33:06.363469 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 
23:33:06.363469 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:33:06.379586 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:33:06.384461 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:33:06.384461 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:33:06.384461 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:33:06.384461 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:33:06.384461 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 23 23:33:06.847313 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 23:33:07.251558 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:33:07.251558 ignition[1279]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 23:33:07.295145 ignition[1279]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:33:07.302760 ignition[1279]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:33:07.302760 ignition[1279]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 23:33:07.302760 ignition[1279]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 23:33:07.302760 ignition[1279]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 23:33:07.302760 ignition[1279]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:33:07.302760 ignition[1279]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:33:07.302760 ignition[1279]: INFO : files: files passed Jan 23 23:33:07.302760 ignition[1279]: INFO : Ignition finished successfully Jan 23 23:33:07.350964 kernel: audit: type=1130 audit(1769211187.311:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.306157 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:33:07.314630 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 23 23:33:07.329984 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:33:07.365244 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:33:07.365705 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:33:07.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.384323 kernel: audit: type=1130 audit(1769211187.373:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.384409 kernel: audit: type=1131 audit(1769211187.373:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.409816 initrd-setup-root-after-ignition[1311]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:33:07.414706 initrd-setup-root-after-ignition[1311]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:33:07.418396 initrd-setup-root-after-ignition[1315]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:33:07.424830 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:33:07.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.432351 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:33:07.442360 kernel: audit: type=1130 audit(1769211187.428:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.442801 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:33:07.539387 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:33:07.539825 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:33:07.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.548420 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:33:07.563150 kernel: audit: type=1130 audit(1769211187.546:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.563318 kernel: audit: type=1131 audit(1769211187.546:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:33:07.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.563054 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:33:07.566134 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:33:07.572473 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:33:07.616793 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:33:07.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.629499 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:33:07.635971 kernel: audit: type=1130 audit(1769211187.620:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.670326 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:33:07.670849 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:33:07.674623 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:33:07.682807 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:33:07.687194 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:33:07.687533 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:33:07.697010 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:33:07.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.699966 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:33:07.703912 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:33:07.711025 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:33:07.714545 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:33:07.719945 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 23:33:07.727741 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:33:07.733357 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:33:07.739345 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:33:07.742444 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:33:07.749920 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:33:07.752797 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:33:07.753121 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:33:07.762068 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:33:07.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:33:07.765379 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:33:07.773856 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:33:07.776142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:33:07.779529 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:33:07.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.780182 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:33:07.790208 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:33:07.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.790502 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:33:07.793932 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:33:07.794311 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:33:07.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.810026 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:33:07.815080 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:33:07.815821 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:33:07.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.834082 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:33:07.843101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:33:07.843831 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:33:07.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.849671 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:33:07.850008 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:33:07.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.859824 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:33:07.860189 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:33:07.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.890406 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 23 23:33:07.890621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:33:07.900678 ignition[1335]: INFO : Ignition 2.24.0 Jan 23 23:33:07.900678 ignition[1335]: INFO : Stage: umount Jan 23 23:33:07.900678 ignition[1335]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:33:07.900678 ignition[1335]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:33:07.900678 ignition[1335]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:33:07.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.914946 ignition[1335]: INFO : PUT result: OK Jan 23 23:33:07.920636 ignition[1335]: INFO : umount: umount passed Jan 23 23:33:07.923638 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:33:07.927242 ignition[1335]: INFO : Ignition finished successfully Jan 23 23:33:07.932553 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:33:07.936924 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:33:07.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.944695 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:33:07.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.944866 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:33:07.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.948897 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:33:07.949021 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:33:07.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.953501 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:33:07.953619 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:33:07.962987 systemd[1]: Stopped target network.target - Network. Jan 23 23:33:07.966530 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:33:07.966648 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:33:07.970130 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:33:07.977059 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 23 23:33:07.982603 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:33:07.987646 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:33:07.990575 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:33:08.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.994552 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:33:07.994639 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:33:08.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:07.997998 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:33:07.998080 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:33:08.005321 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 23:33:08.005387 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 23:33:08.013531 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:33:08.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.013652 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:33:08.022554 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:33:08.022669 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:33:08.026390 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:33:08.033524 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:33:08.040683 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:33:08.040916 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:33:08.045966 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:33:08.046178 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:33:08.084197 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:33:08.084749 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:33:08.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.094547 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:33:08.093000 audit: BPF prog-id=6 op=UNLOAD Jan 23 23:33:08.095138 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:33:08.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.107103 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jan 23 23:33:08.105000 audit: BPF prog-id=9 op=UNLOAD Jan 23 23:33:08.112608 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:33:08.112739 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:33:08.121247 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:33:08.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.123480 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:33:08.123631 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:33:08.127294 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:33:08.127435 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:33:08.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.145890 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:33:08.146014 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:33:08.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.153779 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:33:08.178869 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:33:08.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.179148 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:33:08.182927 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:33:08.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.183054 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:33:08.187452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:33:08.187544 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:33:08.194546 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:33:08.194670 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:33:08.211696 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:33:08.211889 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:33:08.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.224519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:33:08.224648 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 23 23:33:08.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.236059 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:33:08.244026 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 23:33:08.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.244226 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:33:08.248371 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:33:08.248500 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:33:08.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.263616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:33:08.264755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:33:08.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.289637 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:33:08.291897 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:33:08.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.301305 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:33:08.301788 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:33:08.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:08.311025 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:33:08.317496 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:33:08.362074 systemd[1]: Switching root. Jan 23 23:33:08.410270 systemd-journald[359]: Journal stopped Jan 23 23:33:12.649956 systemd-journald[359]: Received SIGTERM from PID 1 (systemd). 
Jan 23 23:33:12.650094 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:33:12.650156 kernel: SELinux: policy capability open_perms=1 Jan 23 23:33:12.650199 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:33:12.650240 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:33:12.650271 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:33:12.650316 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:33:12.650351 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:33:12.650385 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:33:12.650427 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 23:33:12.650459 systemd[1]: Successfully loaded SELinux policy in 128.553ms. Jan 23 23:33:12.650507 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.638ms. Jan 23 23:33:12.650546 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 23:33:12.650584 systemd[1]: Detected virtualization amazon. Jan 23 23:33:12.650623 systemd[1]: Detected architecture arm64. Jan 23 23:33:12.650657 systemd[1]: Detected first boot. Jan 23 23:33:12.650690 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 23:33:12.650762 zram_generator::config[1378]: No configuration found. Jan 23 23:33:12.650807 kernel: NET: Registered PF_VSOCK protocol family Jan 23 23:33:12.650846 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:33:12.650883 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 23 23:33:12.650916 kernel: audit: type=1334 audit(1769211191.765:88): prog-id=12 op=LOAD Jan 23 23:33:12.650948 kernel: audit: type=1334 audit(1769211191.765:89): prog-id=3 op=UNLOAD Jan 23 23:33:12.650977 kernel: audit: type=1334 audit(1769211191.766:90): prog-id=13 op=LOAD Jan 23 23:33:12.651017 kernel: audit: type=1334 audit(1769211191.768:91): prog-id=14 op=LOAD Jan 23 23:33:12.651047 kernel: audit: type=1334 audit(1769211191.768:92): prog-id=4 op=UNLOAD Jan 23 23:33:12.651081 kernel: audit: type=1334 audit(1769211191.768:93): prog-id=5 op=UNLOAD Jan 23 23:33:12.651113 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:33:12.651149 kernel: audit: type=1131 audit(1769211191.771:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.651183 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 23:33:12.651219 kernel: audit: type=1334 audit(1769211191.785:95): prog-id=12 op=UNLOAD Jan 23 23:33:12.651253 kernel: audit: type=1130 audit(1769211191.789:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.651289 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 23 23:33:12.651324 kernel: audit: type=1131 audit(1769211191.789:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.651360 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:33:12.651396 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:33:12.651523 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:33:12.651564 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:33:12.651596 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:33:12.651636 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:33:12.651764 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:33:12.662080 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:33:12.662161 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:33:12.662203 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:33:12.662240 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:33:12.662289 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:33:12.662324 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:33:12.662362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:33:12.662396 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 23:33:12.662431 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:33:12.662467 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:33:12.662502 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 23:33:12.662539 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:33:12.662572 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:33:12.662604 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:33:12.662640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:33:12.662671 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:33:12.662701 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 23 23:33:12.662772 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:33:12.662814 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:33:12.662847 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:33:12.662881 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:33:12.662916 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 23:33:12.662947 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 23:33:12.662977 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Jan 23 23:33:12.663009 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:33:12.663044 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 23:33:12.663077 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 23:33:12.663108 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:33:12.663137 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:33:12.663168 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:33:12.663201 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:33:12.663234 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:33:12.663271 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:33:12.663304 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:33:12.663336 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:33:12.663367 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 23:33:12.663400 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:33:12.663436 systemd[1]: Reached target machines.target - Containers. Jan 23 23:33:12.663468 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:33:12.663506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:33:12.663537 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:33:12.663571 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:33:12.663602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:33:12.663633 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:33:12.663663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:33:12.663693 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:33:12.668840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:33:12.668901 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:33:12.668934 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:33:12.668967 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:33:12.669002 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:33:12.669033 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:33:12.669070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 23:33:12.669109 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:33:12.669141 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:33:12.669172 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 23 23:33:12.669206 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:33:12.669244 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 23:33:12.669277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:33:12.669310 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:33:12.669343 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:33:12.669375 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:33:12.669408 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:33:12.669444 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:33:12.669476 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:33:12.669512 kernel: fuse: init (API version 7.41) Jan 23 23:33:12.669543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:33:12.669578 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:33:12.669610 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:33:12.669643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:33:12.669679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:33:12.669755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:33:12.669794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:33:12.669827 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:33:12.669859 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:33:12.669901 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:33:12.669934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:33:12.669968 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:33:12.670000 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:33:12.670037 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:33:12.670069 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:33:12.670100 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:33:12.670136 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:33:12.670167 kernel: ACPI: bus type drm_connector registered Jan 23 23:33:12.670198 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 23:33:12.670238 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:33:12.670273 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:33:12.670309 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:33:12.670344 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:33:12.670376 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:33:12.670410 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:33:12.670442 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. 
Jan 23 23:33:12.670472 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:33:12.670509 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:33:12.670544 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 23:33:12.670631 systemd-journald[1456]: Collecting audit messages is enabled. Jan 23 23:33:12.670687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:33:12.672995 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 23:33:12.673065 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:33:12.673103 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:33:12.673137 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:33:12.673174 systemd-journald[1456]: Journal started Jan 23 23:33:12.673227 systemd-journald[1456]: Runtime Journal (/run/log/journal/ec28b4043b39313ac4215c5536df3e77) is 8M, max 75.3M, 67.3M free. Jan 23 23:33:11.955000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 23:33:12.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.264000 audit: BPF prog-id=14 op=UNLOAD Jan 23 23:33:12.264000 audit: BPF prog-id=13 op=UNLOAD Jan 23 23:33:12.267000 audit: BPF prog-id=15 op=LOAD Jan 23 23:33:12.267000 audit: BPF prog-id=16 op=LOAD Jan 23 23:33:12.268000 audit: BPF prog-id=17 op=LOAD Jan 23 23:33:12.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:33:12.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:33:12.642000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 23:33:12.642000 audit[1456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe4708d60 a2=4000 a3=0 items=0 ppid=1 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:12.642000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 23:33:11.756017 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:33:11.771596 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 23:33:11.772751 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:33:12.688470 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:33:12.693020 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:33:12.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.725946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:33:12.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.760643 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:33:12.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.772504 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:33:12.782976 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:33:12.788079 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 23:33:12.819765 kernel: loop1: detected capacity change from 0 to 61504 Jan 23 23:33:12.833884 systemd-journald[1456]: Time spent on flushing to /var/log/journal/ec28b4043b39313ac4215c5536df3e77 is 100.906ms for 1058 entries. Jan 23 23:33:12.833884 systemd-journald[1456]: System Journal (/var/log/journal/ec28b4043b39313ac4215c5536df3e77) is 8M, max 588.1M, 580.1M free. Jan 23 23:33:12.958105 systemd-journald[1456]: Received client request to flush runtime journal. Jan 23 23:33:12.958199 kernel: loop2: detected capacity change from 0 to 45344 Jan 23 23:33:12.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:12.890033 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 23:33:12.899325 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:33:12.913922 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:33:12.965866 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:33:12.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:13.002414 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:33:13.006940 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 23:33:13.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:13.049040 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:33:13.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:13.056000 audit: BPF prog-id=18 op=LOAD Jan 23 23:33:13.056000 audit: BPF prog-id=19 op=LOAD Jan 23 23:33:13.056000 audit: BPF prog-id=20 op=LOAD Jan 23 23:33:13.062017 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 23:33:13.064000 audit: BPF prog-id=21 op=LOAD Jan 23 23:33:13.068262 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:33:13.077128 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:33:13.094000 audit: BPF prog-id=22 op=LOAD Jan 23 23:33:13.094000 audit: BPF prog-id=23 op=LOAD Jan 23 23:33:13.095000 audit: BPF prog-id=24 op=LOAD Jan 23 23:33:13.098648 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:33:13.105000 audit: BPF prog-id=25 op=LOAD Jan 23 23:33:13.105000 audit: BPF prog-id=26 op=LOAD Jan 23 23:33:13.107000 audit: BPF prog-id=27 op=LOAD Jan 23 23:33:13.112201 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 23:33:13.258102 systemd-tmpfiles[1536]: ACLs are not supported, ignoring. Jan 23 23:33:13.258140 systemd-tmpfiles[1536]: ACLs are not supported, ignoring. Jan 23 23:33:13.269476 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:33:13.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:13.287990 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:33:13.289483 systemd-nsresourced[1538]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 23 23:33:13.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:33:13.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:13.314923 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 23:33:13.334773 kernel: loop3: detected capacity change from 0 to 100192 Jan 23 23:33:13.485026 systemd-oomd[1534]: No swap; memory pressure usage will be degraded Jan 23 23:33:13.486608 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 23:33:13.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:13.553587 systemd-resolved[1535]: Positive Trust Anchors: Jan 23 23:33:13.554294 systemd-resolved[1535]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:33:13.554454 systemd-resolved[1535]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 23:33:13.554626 systemd-resolved[1535]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:33:13.570967 systemd-resolved[1535]: Defaulting to hostname 'linux'. Jan 23 23:33:13.573697 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:33:13.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:13.576771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:33:13.693775 kernel: loop4: detected capacity change from 0 to 211168 Jan 23 23:33:13.747773 kernel: loop5: detected capacity change from 0 to 61504 Jan 23 23:33:13.777781 kernel: loop6: detected capacity change from 0 to 45344 Jan 23 23:33:13.804845 kernel: loop7: detected capacity change from 0 to 100192 Jan 23 23:33:13.832767 kernel: loop1: detected capacity change from 0 to 211168 Jan 23 23:33:13.870110 (sd-merge)[1559]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 23 23:33:13.878507 (sd-merge)[1559]: Merged extensions into '/usr'. Jan 23 23:33:13.895438 systemd[1]: Reload requested from client PID 1490 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:33:13.895469 systemd[1]: Reloading... Jan 23 23:33:14.084933 zram_generator::config[1595]: No configuration found. Jan 23 23:33:14.534549 systemd[1]: Reloading finished in 638 ms. Jan 23 23:33:14.557883 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:33:14.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:33:14.563814 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:33:14.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:14.584424 systemd[1]: Starting ensure-sysext.service... Jan 23 23:33:14.591040 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:33:14.592000 audit: BPF prog-id=8 op=UNLOAD Jan 23 23:33:14.592000 audit: BPF prog-id=7 op=UNLOAD Jan 23 23:33:14.595000 audit: BPF prog-id=28 op=LOAD Jan 23 23:33:14.595000 audit: BPF prog-id=29 op=LOAD Jan 23 23:33:14.600267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:33:14.603000 audit: BPF prog-id=30 op=LOAD Jan 23 23:33:14.607000 audit: BPF prog-id=15 op=UNLOAD Jan 23 23:33:14.607000 audit: BPF prog-id=31 op=LOAD Jan 23 23:33:14.608000 audit: BPF prog-id=32 op=LOAD Jan 23 23:33:14.608000 audit: BPF prog-id=16 op=UNLOAD Jan 23 23:33:14.608000 audit: BPF prog-id=17 op=UNLOAD Jan 23 23:33:14.609000 audit: BPF prog-id=33 op=LOAD Jan 23 23:33:14.611000 audit: BPF prog-id=21 op=UNLOAD Jan 23 23:33:14.613000 audit: BPF prog-id=34 op=LOAD Jan 23 23:33:14.614000 audit: BPF prog-id=25 op=UNLOAD Jan 23 23:33:14.614000 audit: BPF prog-id=35 op=LOAD Jan 23 23:33:14.614000 audit: BPF prog-id=36 op=LOAD Jan 23 23:33:14.615000 audit: BPF prog-id=26 op=UNLOAD Jan 23 23:33:14.615000 audit: BPF prog-id=27 op=UNLOAD Jan 23 23:33:14.621000 audit: BPF prog-id=37 op=LOAD Jan 23 23:33:14.630000 audit: BPF prog-id=18 op=UNLOAD Jan 23 23:33:14.630000 audit: BPF prog-id=38 op=LOAD Jan 23 23:33:14.631000 audit: BPF prog-id=39 op=LOAD Jan 23 23:33:14.631000 audit: BPF prog-id=19 op=UNLOAD Jan 23 23:33:14.631000 audit: BPF prog-id=20 op=UNLOAD Jan 23 23:33:14.634000 audit: BPF prog-id=40 op=LOAD Jan 23 23:33:14.634000 audit: BPF prog-id=22 op=UNLOAD Jan 23 23:33:14.635000 audit: BPF prog-id=41 op=LOAD Jan 23 23:33:14.635000 audit: BPF prog-id=42 op=LOAD Jan 23 23:33:14.635000 audit: BPF prog-id=23 op=UNLOAD Jan 23 23:33:14.635000 audit: BPF prog-id=24 op=UNLOAD Jan 23 23:33:14.655690 systemd[1]: Reload requested from client PID 1641 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:33:14.656018 systemd[1]: Reloading... Jan 23 23:33:14.698888 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 23:33:14.698981 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 23:33:14.699680 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:33:14.708088 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Jan 23 23:33:14.708285 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Jan 23 23:33:14.714871 systemd-udevd[1643]: Using default interface naming scheme 'v257'. Jan 23 23:33:14.754438 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:33:14.754475 systemd-tmpfiles[1642]: Skipping /boot Jan 23 23:33:14.795276 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:33:14.795312 systemd-tmpfiles[1642]: Skipping /boot Jan 23 23:33:14.873751 zram_generator::config[1678]: No configuration found. 
Jan 23 23:33:15.104419 (udev-worker)[1740]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:33:15.490087 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 23:33:15.491565 systemd[1]: Reloading finished in 834 ms. Jan 23 23:33:15.534916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:33:15.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:15.582917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:33:15.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:15.593000 audit: BPF prog-id=43 op=LOAD Jan 23 23:33:15.593000 audit: BPF prog-id=44 op=LOAD Jan 23 23:33:15.593000 audit: BPF prog-id=45 op=LOAD Jan 23 23:33:15.595000 audit: BPF prog-id=46 op=LOAD Jan 23 23:33:15.596000 audit: BPF prog-id=40 op=UNLOAD Jan 23 23:33:15.596000 audit: BPF prog-id=41 op=UNLOAD Jan 23 23:33:15.596000 audit: BPF prog-id=42 op=UNLOAD Jan 23 23:33:15.598000 audit: BPF prog-id=34 op=UNLOAD Jan 23 23:33:15.598000 audit: BPF prog-id=47 op=LOAD Jan 23 23:33:15.598000 audit: BPF prog-id=48 op=LOAD Jan 23 23:33:15.598000 audit: BPF prog-id=35 op=UNLOAD Jan 23 23:33:15.598000 audit: BPF prog-id=36 op=UNLOAD Jan 23 23:33:15.599000 audit: BPF prog-id=49 op=LOAD Jan 23 23:33:15.599000 audit: BPF prog-id=50 op=LOAD Jan 23 23:33:15.599000 audit: BPF prog-id=28 op=UNLOAD Jan 23 23:33:15.599000 audit: BPF prog-id=29 op=UNLOAD Jan 23 23:33:15.601000 audit: BPF prog-id=51 op=LOAD Jan 23 23:33:15.602000 audit: BPF prog-id=37 op=UNLOAD Jan 23 23:33:15.602000 audit: BPF prog-id=52 op=LOAD Jan 23 23:33:15.603000 audit: BPF prog-id=53 op=LOAD Jan 23 23:33:15.603000 audit: BPF prog-id=38 op=UNLOAD Jan 23 23:33:15.603000 audit: BPF prog-id=39 op=UNLOAD Jan 23 23:33:15.604000 audit: BPF prog-id=54 op=LOAD Jan 23 23:33:15.606000 audit: BPF prog-id=30 op=UNLOAD Jan 23 23:33:15.606000 audit: BPF prog-id=55 op=LOAD Jan 23 23:33:15.606000 audit: BPF prog-id=56 op=LOAD Jan 23 23:33:15.606000 audit: BPF prog-id=31 op=UNLOAD Jan 23 23:33:15.606000 audit: BPF prog-id=32 op=UNLOAD Jan 23 23:33:15.610000 audit: BPF prog-id=57 op=LOAD Jan 23 23:33:15.612000 audit: BPF prog-id=33 op=UNLOAD Jan 23 23:33:15.690196 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 23:33:15.699913 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:33:15.704216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:33:15.708366 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:33:15.716432 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:33:15.758915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:33:15.761739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 23 23:33:15.762168 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 23:33:15.767469 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:33:15.770208 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 23:33:15.775679 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:33:15.779000 audit: BPF prog-id=58 op=LOAD Jan 23 23:33:15.786273 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:33:15.792533 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:33:15.800661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:33:15.809645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:33:15.811838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:33:15.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:15.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:15.817503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:33:15.818137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:33:15.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:15.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:15.844464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:33:15.861913 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:33:15.878482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:33:15.881569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:33:15.882046 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 23:33:15.882348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 23:33:15.894340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:33:15.898138 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 23 23:33:15.900917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:33:15.901321 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 23:33:15.901867 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 23:33:15.902258 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:33:15.926939 systemd[1]: Finished ensure-sysext.service. Jan 23 23:33:15.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.004353 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:33:16.007844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:33:16.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.017000 audit[1782]: SYSTEM_BOOT pid=1782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.027957 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:33:16.028975 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:33:16.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.041641 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:33:16.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.046224 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:33:16.107586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:33:16.108864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 23 23:33:16.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.115443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:33:16.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.116142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:33:16.119823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:33:16.119958 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:33:16.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:16.147312 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:33:16.152325 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:33:16.152000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 23:33:16.152000 audit[1838]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff6447ce0 a2=420 a3=0 items=0 ppid=1774 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:16.152000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 23:33:16.155377 augenrules[1838]: No rules Jan 23 23:33:16.160462 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:33:16.164932 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 23:33:16.417106 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:33:16.424052 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:33:16.442939 systemd-networkd[1781]: lo: Link UP Jan 23 23:33:16.442962 systemd-networkd[1781]: lo: Gained carrier Jan 23 23:33:16.446934 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 23 23:33:16.449818 systemd-networkd[1781]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 23:33:16.449826 systemd-networkd[1781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:33:16.450921 systemd[1]: Reached target network.target - Network. Jan 23 23:33:16.457221 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 23:33:16.462118 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:33:16.470662 systemd-networkd[1781]: eth0: Link UP Jan 23 23:33:16.473059 systemd-networkd[1781]: eth0: Gained carrier Jan 23 23:33:16.473116 systemd-networkd[1781]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 23:33:16.480559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:33:16.494136 systemd-networkd[1781]: eth0: DHCPv4 address 172.31.23.100/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:33:16.527872 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 23:33:16.534864 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:33:17.846893 systemd-networkd[1781]: eth0: Gained IPv6LL Jan 23 23:33:17.851514 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:33:17.857159 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:33:19.078235 ldconfig[1779]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:33:19.086962 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:33:19.093169 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:33:19.128866 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:33:19.132350 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:33:19.135387 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:33:19.138661 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:33:19.142143 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:33:19.145290 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:33:19.148383 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 23:33:19.151397 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 23 23:33:19.154078 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:33:19.156878 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:33:19.156929 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:33:19.158937 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:33:19.163032 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:33:19.168957 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 23 23:33:19.176018 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 23:33:19.179464 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 23:33:19.182937 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 23:33:19.189499 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:33:19.192689 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 23:33:19.197020 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:33:19.199834 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:33:19.202107 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:33:19.204624 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:33:19.204880 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:33:19.207044 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:33:19.218040 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:33:19.226906 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:33:19.237044 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:33:19.256356 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:33:19.261146 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:33:19.264019 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:33:19.269143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:33:19.274985 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:33:19.283219 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:33:19.293789 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:33:19.302100 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:33:19.315850 jq[1924]: false Jan 23 23:33:19.312446 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 23:33:19.324046 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:33:19.336499 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:33:19.348149 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:33:19.350598 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:33:19.351507 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:33:19.353144 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:33:19.362341 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:33:19.374181 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:33:19.378421 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 23 23:33:19.378905 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:33:19.420970 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:33:19.421835 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:33:19.479216 jq[1936]: true Jan 23 23:33:19.505347 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:33:19.506298 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:33:19.535084 extend-filesystems[1925]: Found /dev/nvme0n1p6 Jan 23 23:33:19.580541 ntpd[1928]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 21:06:10 UTC 2026 (1): Starting Jan 23 23:33:19.582329 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 21:06:10 UTC 2026 (1): Starting Jan 23 23:33:19.582329 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:33:19.582329 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: ---------------------------------------------------- Jan 23 23:33:19.582329 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:33:19.582329 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:33:19.582329 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: corporation. Support and training for ntp-4 are Jan 23 23:33:19.582329 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: available at https://www.nwtime.org/support Jan 23 23:33:19.582329 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: ---------------------------------------------------- Jan 23 23:33:19.580666 ntpd[1928]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:33:19.587269 extend-filesystems[1925]: Found /dev/nvme0n1p9 Jan 23 23:33:19.580686 ntpd[1928]: ---------------------------------------------------- Jan 23 23:33:19.604394 extend-filesystems[1925]: Checking size of /dev/nvme0n1p9 Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: proto: precision = 0.108 usec (-23) Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: basedate set to 2026-01-11 Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: gps base set to 2026-01-11 (week 2401) Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: Listen normally on 3 eth0 172.31.23.100:123 Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: Listen normally on 4 lo [::1]:123 Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: Listen normally on 5 eth0 [fe80::456:ffff:fe48:217f%2]:123 Jan 23 23:33:19.612984 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: Listening on routing socket on fd #22 for interface updates Jan 23 23:33:19.580704 ntpd[1928]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:33:19.580837 ntpd[1928]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:33:19.580855 ntpd[1928]: corporation. 
Support and training for ntp-4 are Jan 23 23:33:19.580872 ntpd[1928]: available at https://www.nwtime.org/support Jan 23 23:33:19.580889 ntpd[1928]: ---------------------------------------------------- Jan 23 23:33:19.589160 ntpd[1928]: proto: precision = 0.108 usec (-23) Jan 23 23:33:19.590196 ntpd[1928]: basedate set to 2026-01-11 Jan 23 23:33:19.590227 ntpd[1928]: gps base set to 2026-01-11 (week 2401) Jan 23 23:33:19.590426 ntpd[1928]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:33:19.590476 ntpd[1928]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:33:19.590826 ntpd[1928]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:33:19.590878 ntpd[1928]: Listen normally on 3 eth0 172.31.23.100:123 Jan 23 23:33:19.590927 ntpd[1928]: Listen normally on 4 lo [::1]:123 Jan 23 23:33:19.590972 ntpd[1928]: Listen normally on 5 eth0 [fe80::456:ffff:fe48:217f%2]:123 Jan 23 23:33:19.591013 ntpd[1928]: Listening on routing socket on fd #22 for interface updates Jan 23 23:33:19.623234 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:33:19.641802 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 23:33:19.648202 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:33:19.662144 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:33:19.663931 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:33:19.663931 ntpd[1928]: 23 Jan 23:33:19 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:33:19.662200 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:33:19.687482 jq[1970]: true Jan 23 23:33:19.703869 coreos-metadata[1921]: Jan 23 23:33:19.696 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:33:19.703869 coreos-metadata[1921]: Jan 23 23:33:19.698 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 23:33:19.703869 coreos-metadata[1921]: Jan 23 23:33:19.703 INFO Fetch successful Jan 23 23:33:19.703869 coreos-metadata[1921]: Jan 23 23:33:19.703 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 23:33:19.702920 dbus-daemon[1922]: [system] SELinux support is enabled Jan 23 23:33:19.712756 coreos-metadata[1921]: Jan 23 23:33:19.705 INFO Fetch successful Jan 23 23:33:19.712756 coreos-metadata[1921]: Jan 23 23:33:19.705 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 23:33:19.712960 extend-filesystems[1925]: Resized partition /dev/nvme0n1p9 Jan 23 23:33:19.706948 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:33:19.726775 coreos-metadata[1921]: Jan 23 23:33:19.717 INFO Fetch successful Jan 23 23:33:19.726775 coreos-metadata[1921]: Jan 23 23:33:19.717 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 23:33:19.726775 coreos-metadata[1921]: Jan 23 23:33:19.720 INFO Fetch successful Jan 23 23:33:19.726775 coreos-metadata[1921]: Jan 23 23:33:19.720 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 23:33:19.719764 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:33:19.719829 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 23 23:33:19.731085 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:33:19.735878 tar[1949]: linux-arm64/LICENSE Jan 23 23:33:19.735878 tar[1949]: linux-arm64/helm Jan 23 23:33:19.731130 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:33:19.736521 coreos-metadata[1921]: Jan 23 23:33:19.735 INFO Fetch failed with 404: resource not found Jan 23 23:33:19.736521 coreos-metadata[1921]: Jan 23 23:33:19.736 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 23:33:19.741024 extend-filesystems[2000]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 23:33:19.749924 coreos-metadata[1921]: Jan 23 23:33:19.740 INFO Fetch successful Jan 23 23:33:19.749924 coreos-metadata[1921]: Jan 23 23:33:19.741 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 23:33:19.749924 coreos-metadata[1921]: Jan 23 23:33:19.748 INFO Fetch successful Jan 23 23:33:19.749924 coreos-metadata[1921]: Jan 23 23:33:19.748 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 23:33:19.752650 coreos-metadata[1921]: Jan 23 23:33:19.750 INFO Fetch successful Jan 23 23:33:19.752650 coreos-metadata[1921]: Jan 23 23:33:19.750 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 23:33:19.754640 dbus-daemon[1922]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1781 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 23:33:19.760838 coreos-metadata[1921]: Jan 23 23:33:19.759 INFO Fetch successful Jan 23 23:33:19.760838 coreos-metadata[1921]: Jan 23 23:33:19.759 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 23:33:19.763940 coreos-metadata[1921]: Jan 23 23:33:19.762 INFO Fetch successful Jan 23 23:33:19.769794 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 23 23:33:19.781570 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:33:19.789008 update_engine[1935]: I20260123 23:33:19.786867 1935 main.cc:92] Flatcar Update Engine starting Jan 23 23:33:19.792328 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 23:33:19.805403 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:33:19.812398 update_engine[1935]: I20260123 23:33:19.805465 1935 update_check_scheduler.cc:74] Next update check in 11m56s Jan 23 23:33:19.827750 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 23 23:33:19.855350 extend-filesystems[2000]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 23:33:19.855350 extend-filesystems[2000]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 23:33:19.855350 extend-filesystems[2000]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 23 23:33:19.865155 extend-filesystems[1925]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:33:19.904398 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:33:19.908198 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:33:19.910239 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 23 23:33:20.024889 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:33:20.028084 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:33:20.046299 bash[2031]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:33:20.051514 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:33:20.060156 systemd[1]: Starting sshkeys.service... Jan 23 23:33:20.109941 amazon-ssm-agent[1993]: Initializing new seelog logger Jan 23 23:33:20.109941 amazon-ssm-agent[1993]: New Seelog Logger Creation Complete Jan 23 23:33:20.109941 amazon-ssm-agent[1993]: 2026/01/23 23:33:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:20.109941 amazon-ssm-agent[1993]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:20.121218 amazon-ssm-agent[1993]: 2026/01/23 23:33:20 processing appconfig overrides Jan 23 23:33:20.121218 amazon-ssm-agent[1993]: 2026/01/23 23:33:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:20.121218 amazon-ssm-agent[1993]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:20.121218 amazon-ssm-agent[1993]: 2026/01/23 23:33:20 processing appconfig overrides Jan 23 23:33:20.121218 amazon-ssm-agent[1993]: 2026/01/23 23:33:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:20.121218 amazon-ssm-agent[1993]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:20.121218 amazon-ssm-agent[1993]: 2026/01/23 23:33:20 processing appconfig overrides Jan 23 23:33:20.128643 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.1179 INFO Proxy environment variables: Jan 23 23:33:20.134685 amazon-ssm-agent[1993]: 2026/01/23 23:33:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:20.134685 amazon-ssm-agent[1993]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:20.134895 amazon-ssm-agent[1993]: 2026/01/23 23:33:20 processing appconfig overrides Jan 23 23:33:20.178014 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:33:20.190427 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 23:33:20.197598 systemd-logind[1934]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:33:20.197641 systemd-logind[1934]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:33:20.201297 systemd-logind[1934]: New seat seat0. Jan 23 23:33:20.210503 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:33:20.229753 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.1180 INFO https_proxy: Jan 23 23:33:20.262510 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 23:33:20.268464 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:33:20.269602 dbus-daemon[1922]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2003 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:33:20.291802 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 23 23:33:20.337855 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.1180 INFO http_proxy: Jan 23 23:33:20.448178 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.1180 INFO no_proxy: Jan 23 23:33:20.577747 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.1181 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:33:20.657538 coreos-metadata[2057]: Jan 23 23:33:20.657 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:33:20.665934 coreos-metadata[2057]: Jan 23 23:33:20.665 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:33:20.665934 coreos-metadata[2057]: Jan 23 23:33:20.665 INFO Fetch successful Jan 23 23:33:20.665934 coreos-metadata[2057]: Jan 23 23:33:20.665 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:33:20.667044 coreos-metadata[2057]: Jan 23 23:33:20.666 INFO Fetch successful Jan 23 23:33:20.677113 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.1182 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:33:20.679329 unknown[2057]: wrote ssh authorized keys file for user: core Jan 23 23:33:20.784017 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.6198 INFO Agent will take identity from EC2 Jan 23 23:33:20.805007 update-ssh-keys[2130]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:33:20.809818 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:33:20.824503 systemd[1]: Finished sshkeys.service. Jan 23 23:33:20.868951 containerd[1972]: time="2026-01-23T23:33:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 23:33:20.881755 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.6252 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 23:33:20.884811 containerd[1972]: time="2026-01-23T23:33:20.884752751Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 23:33:20.984970 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.6252 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:33:21.016740 containerd[1972]: time="2026-01-23T23:33:21.016480015Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.58µs" Jan 23 23:33:21.016740 containerd[1972]: time="2026-01-23T23:33:21.016538203Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 23:33:21.016740 containerd[1972]: time="2026-01-23T23:33:21.016606075Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 23:33:21.016740 containerd[1972]: time="2026-01-23T23:33:21.016642615Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.017453143Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.017500063Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.017619643Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 
Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.017647339Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.018147727Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.018181723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.018209743Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.018231847Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.018544867Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 23:33:21.019958 containerd[1972]: time="2026-01-23T23:33:21.018573031Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 23:33:21.023476 containerd[1972]: time="2026-01-23T23:33:21.022795519Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 23:33:21.023476 containerd[1972]: time="2026-01-23T23:33:21.023329519Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 23:33:21.023476 containerd[1972]: time="2026-01-23T23:33:21.023391763Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 23:33:21.023476 containerd[1972]: time="2026-01-23T23:33:21.023417443Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 23:33:21.025476 containerd[1972]: time="2026-01-23T23:33:21.025003711Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 23:33:21.027609 containerd[1972]: time="2026-01-23T23:33:21.027383455Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 23:33:21.027609 containerd[1972]: time="2026-01-23T23:33:21.027568243Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:33:21.038614 containerd[1972]: time="2026-01-23T23:33:21.038503627Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 23:33:21.038872 containerd[1972]: time="2026-01-23T23:33:21.038842111Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 23:33:21.039158 containerd[1972]: time="2026-01-23T23:33:21.039126487Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 
23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041147263Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041230315Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041291035Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041325871Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041352139Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041383171Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041415979Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041446891Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041474275Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041501143Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.041537179Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.042374023Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.042448843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 23:33:21.042810 containerd[1972]: time="2026-01-23T23:33:21.042486715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 23:33:21.043902 containerd[1972]: time="2026-01-23T23:33:21.042513847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 23:33:21.043902 containerd[1972]: time="2026-01-23T23:33:21.042541999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 23:33:21.043902 containerd[1972]: time="2026-01-23T23:33:21.042570103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 23:33:21.043902 containerd[1972]: time="2026-01-23T23:33:21.042615115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 23:33:21.043902 containerd[1972]: time="2026-01-23T23:33:21.042650335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 23:33:21.043902 containerd[1972]: time="2026-01-23T23:33:21.042679435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 
23:33:21.045750 containerd[1972]: time="2026-01-23T23:33:21.042706399Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 23:33:21.045750 containerd[1972]: time="2026-01-23T23:33:21.044636287Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 23:33:21.045750 containerd[1972]: time="2026-01-23T23:33:21.044697175Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 23:33:21.045750 containerd[1972]: time="2026-01-23T23:33:21.044821423Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 23:33:21.045750 containerd[1972]: time="2026-01-23T23:33:21.044853499Z" level=info msg="Start snapshots syncer" Jan 23 23:33:21.045750 containerd[1972]: time="2026-01-23T23:33:21.044884219Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 23:33:21.046088 containerd[1972]: time="2026-01-23T23:33:21.045365707Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 23:33:21.046088 containerd[1972]: time="2026-01-23T23:33:21.045463495Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 23:33:21.046299 containerd[1972]: time="2026-01-23T23:33:21.045545731Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 23:33:21.047775 containerd[1972]: time="2026-01-23T23:33:21.047319319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 23:33:21.048116 containerd[1972]: time="2026-01-23T23:33:21.047997451Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 23:33:21.048350 containerd[1972]: time="2026-01-23T23:33:21.048257755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 23:33:21.048432 containerd[1972]: time="2026-01-23T23:33:21.048347515Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 23:33:21.048432 containerd[1972]: time="2026-01-23T23:33:21.048411799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 23:33:21.048573 containerd[1972]: time="2026-01-23T23:33:21.048454603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 23:33:21.048573 containerd[1972]: time="2026-01-23T23:33:21.048524623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 23:33:21.048664 containerd[1972]: time="2026-01-23T23:33:21.048557395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 23:33:21.048664 containerd[1972]: time="2026-01-23T23:33:21.048629887Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 23:33:21.049384 containerd[1972]: time="2026-01-23T23:33:21.048803995Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 23:33:21.051840 containerd[1972]: time="2026-01-23T23:33:21.048894715Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 23:33:21.051989 containerd[1972]: time="2026-01-23T23:33:21.051854059Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 23:33:21.051989 containerd[1972]: time="2026-01-23T23:33:21.051946339Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 23:33:21.051989 containerd[1972]: time="2026-01-23T23:33:21.051974407Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 23:33:21.052120 containerd[1972]: time="2026-01-23T23:33:21.052042303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 23:33:21.052120 containerd[1972]: time="2026-01-23T23:33:21.052077247Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 23:33:21.053866 containerd[1972]: time="2026-01-23T23:33:21.053778511Z" level=info msg="runtime interface created" Jan 23 23:33:21.053866 containerd[1972]: time="2026-01-23T23:33:21.053858983Z" level=info msg="created NRI interface" Jan 23 23:33:21.057279 containerd[1972]: time="2026-01-23T23:33:21.053902027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 23:33:21.057279 containerd[1972]: time="2026-01-23T23:33:21.055802287Z" level=info msg="Connect containerd service" Jan 23 23:33:21.061368 containerd[1972]: time="2026-01-23T23:33:21.061298527Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:33:21.074213 containerd[1972]: time="2026-01-23T23:33:21.073173055Z" level=error msg="failed to load cni during init, please check CRI 
plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:33:21.076690 locksmithd[2006]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:33:21.085870 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.6252 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:33:21.189764 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.6252 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 23 23:33:21.202874 polkitd[2071]: Started polkitd version 126 Jan 23 23:33:21.276000 polkitd[2071]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:33:21.276621 polkitd[2071]: Loading rules from directory /run/polkit-1/rules.d Jan 23 23:33:21.282744 polkitd[2071]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 23:33:21.283495 polkitd[2071]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 23:33:21.283565 polkitd[2071]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 23:33:21.283657 polkitd[2071]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:33:21.289642 polkitd[2071]: Finished loading, compiling and executing 2 rules Jan 23 23:33:21.290250 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 23:33:21.295683 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.6252 INFO [Registrar] Starting registrar module Jan 23 23:33:21.300683 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:33:21.305797 polkitd[2071]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:33:21.366373 systemd-resolved[1535]: System hostname changed to 'ip-172-31-23-100'. Jan 23 23:33:21.366535 systemd-hostnamed[2003]: Hostname set to (transient) Jan 23 23:33:21.389452 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.6393 INFO [EC2Identity] Checking disk for registration info Jan 23 23:33:21.467357 containerd[1972]: time="2026-01-23T23:33:21.467269833Z" level=info msg="Start subscribing containerd event" Jan 23 23:33:21.467483 containerd[1972]: time="2026-01-23T23:33:21.467382561Z" level=info msg="Start recovering state" Jan 23 23:33:21.467756 containerd[1972]: time="2026-01-23T23:33:21.467554173Z" level=info msg="Start event monitor" Jan 23 23:33:21.467756 containerd[1972]: time="2026-01-23T23:33:21.467591889Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:33:21.467756 containerd[1972]: time="2026-01-23T23:33:21.467625021Z" level=info msg="Start streaming server" Jan 23 23:33:21.467756 containerd[1972]: time="2026-01-23T23:33:21.467650029Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 23:33:21.467756 containerd[1972]: time="2026-01-23T23:33:21.467669529Z" level=info msg="runtime interface starting up..." Jan 23 23:33:21.467756 containerd[1972]: time="2026-01-23T23:33:21.467694561Z" level=info msg="starting plugins..." Jan 23 23:33:21.468061 containerd[1972]: time="2026-01-23T23:33:21.467759349Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 23:33:21.468508 containerd[1972]: time="2026-01-23T23:33:21.468440013Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:33:21.468830 containerd[1972]: time="2026-01-23T23:33:21.468578109Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 23:33:21.469956 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:33:21.479560 containerd[1972]: time="2026-01-23T23:33:21.473284713Z" level=info msg="containerd successfully booted in 0.618618s" Jan 23 23:33:21.491273 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.6394 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 23:33:21.590796 amazon-ssm-agent[1993]: 2026-01-23 23:33:20.6394 INFO [EC2Identity] Generating registration keypair Jan 23 23:33:21.706792 amazon-ssm-agent[1993]: 2026-01-23 23:33:21.7064 INFO [EC2Identity] Checking write access before registering Jan 23 23:33:21.753093 amazon-ssm-agent[1993]: 2026/01/23 23:33:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:21.753093 amazon-ssm-agent[1993]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:33:21.755848 amazon-ssm-agent[1993]: 2026/01/23 23:33:21 processing appconfig overrides Jan 23 23:33:21.761255 tar[1949]: linux-arm64/README.md Jan 23 23:33:21.779224 sshd_keygen[1969]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:33:21.797762 amazon-ssm-agent[1993]: 2026-01-23 23:33:21.7074 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 23:33:21.797990 amazon-ssm-agent[1993]: 2026-01-23 23:33:21.7526 INFO [EC2Identity] EC2 registration was successful. Jan 23 23:33:21.798021 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:33:21.801139 amazon-ssm-agent[1993]: 2026-01-23 23:33:21.7528 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 23 23:33:21.801139 amazon-ssm-agent[1993]: 2026-01-23 23:33:21.7529 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:33:21.801139 amazon-ssm-agent[1993]: 2026-01-23 23:33:21.7529 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:33:21.801139 amazon-ssm-agent[1993]: 2026-01-23 23:33:21.7973 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:33:21.801139 amazon-ssm-agent[1993]: 2026-01-23 23:33:21.7976 INFO [CredentialRefresher] Credentials ready Jan 23 23:33:21.807383 amazon-ssm-agent[1993]: 2026-01-23 23:33:21.8009 INFO [CredentialRefresher] Next credential rotation will be in 29.9999401893 minutes Jan 23 23:33:21.832081 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:33:21.837961 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:33:21.868047 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:33:21.869558 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:33:21.877311 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:33:21.911511 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:33:21.918120 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:33:21.927290 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:33:21.933544 systemd[1]: Reached target getty.target - Login Prompts. 
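The "failed to load cni during init" error recorded above is expected on a node that has no pod network installed yet: containerd's cri plugin keeps running and re-reads /etc/cni/net.d once a config appears there. The sketch below is only an illustration of the shape of file the plugin is looking for, assuming the stock bridge/host-local/portmap plugins exist under the /opt/cni/bin directory named in the cri config above; the network name and the 10.88.0.0/16 subnet are invented for the example and are not taken from this log.

# Illustrative only: writes a hypothetical CNI conflist into the directory
# the log names (/etc/cni/net.d). Plugin names and the subnet are assumptions.
import json
import pathlib

conf = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",  # hypothetical network name
    "plugins": [
        {
            "type": "bridge",  # assumes /opt/cni/bin/bridge is installed
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conf, indent=2) + "\n")
print(f"wrote {path}")

On a cluster node this file is normally laid down by the network add-on rather than by hand; the point here is only what the cri plugin expects to find in /etc/cni/net.d.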
Jan 23 23:33:22.830555 amazon-ssm-agent[1993]: 2026-01-23 23:33:22.8304 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:33:22.931999 amazon-ssm-agent[1993]: 2026-01-23 23:33:22.8367 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2202) started Jan 23 23:33:23.032561 amazon-ssm-agent[1993]: 2026-01-23 23:33:22.8368 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 23:33:24.268363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:33:24.272078 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:33:24.276920 systemd[1]: Startup finished in 4.200s (kernel) + 12.323s (initrd) + 15.153s (userspace) = 31.678s. Jan 23 23:33:24.300276 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:33:26.893784 systemd-resolved[1535]: Clock change detected. Flushing caches. Jan 23 23:33:26.937902 kubelet[2217]: E0123 23:33:26.937823 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:33:26.942430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:33:26.943184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:33:26.944162 systemd[1]: kubelet.service: Consumed 1.476s CPU time, 260.9M memory peak. Jan 23 23:33:27.626712 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:33:27.629284 systemd[1]: Started sshd@0-172.31.23.100:22-20.161.92.111:46984.service - OpenSSH per-connection server daemon (20.161.92.111:46984). Jan 23 23:33:28.306996 sshd[2230]: Accepted publickey for core from 20.161.92.111 port 46984 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:33:28.310394 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:33:28.323368 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:33:28.325747 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:33:28.339838 systemd-logind[1934]: New session 1 of user core. Jan 23 23:33:28.363774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:33:28.370451 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:33:28.395612 (systemd)[2236]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:33:28.401343 systemd-logind[1934]: New session 2 of user core. Jan 23 23:33:28.693171 systemd[2236]: Queued start job for default target default.target. Jan 23 23:33:28.705967 systemd[2236]: Created slice app.slice - User Application Slice. Jan 23 23:33:28.706050 systemd[2236]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 23:33:28.706084 systemd[2236]: Reached target paths.target - Paths. Jan 23 23:33:28.706187 systemd[2236]: Reached target timers.target - Timers. Jan 23 23:33:28.708621 systemd[2236]: Starting dbus.socket - D-Bus User Message Bus Socket... 
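The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") and the restart loop that follows are the normal state of a host that has not yet joined a cluster: on kubeadm-managed nodes that file is generated at `kubeadm init`/`kubeadm join` time. The sketch below only inspects the situation and shows a hypothetical minimal KubeletConfiguration for reference, not something to write by hand; the cgroupDriver: systemd line is consistent with the SystemdCgroup=true visible in the containerd cri config earlier in this log.

# Sketch: report whether the config file the kubelet unit is failing on
# exists, and show the rough shape of the file it is asking for.
import pathlib

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

MINIMAL_EXAMPLE = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

if CONFIG.exists():
    print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet should start")
else:
    print(f"{CONFIG} missing - expected before the node joins a cluster; it would look like:")
    print(MINIMAL_EXAMPLE)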
Jan 23 23:33:28.712258 systemd[2236]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 23:33:28.738754 systemd[2236]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:33:28.738989 systemd[2236]: Reached target sockets.target - Sockets. Jan 23 23:33:28.743132 systemd[2236]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 23:33:28.743764 systemd[2236]: Reached target basic.target - Basic System. Jan 23 23:33:28.743930 systemd[2236]: Reached target default.target - Main User Target. Jan 23 23:33:28.743999 systemd[2236]: Startup finished in 331ms. Jan 23 23:33:28.744474 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:33:28.753233 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:33:29.020609 systemd[1]: Started sshd@1-172.31.23.100:22-20.161.92.111:46994.service - OpenSSH per-connection server daemon (20.161.92.111:46994). Jan 23 23:33:29.521653 sshd[2250]: Accepted publickey for core from 20.161.92.111 port 46994 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:33:29.524151 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:33:29.533993 systemd-logind[1934]: New session 3 of user core. Jan 23 23:33:29.552188 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:33:29.776279 sshd[2254]: Connection closed by 20.161.92.111 port 46994 Jan 23 23:33:29.777332 sshd-session[2250]: pam_unix(sshd:session): session closed for user core Jan 23 23:33:29.784092 systemd[1]: sshd@1-172.31.23.100:22-20.161.92.111:46994.service: Deactivated successfully. Jan 23 23:33:29.787357 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 23:33:29.792605 systemd-logind[1934]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:33:29.794322 systemd-logind[1934]: Removed session 3. Jan 23 23:33:29.863388 systemd[1]: Started sshd@2-172.31.23.100:22-20.161.92.111:47008.service - OpenSSH per-connection server daemon (20.161.92.111:47008). Jan 23 23:33:30.336709 sshd[2260]: Accepted publickey for core from 20.161.92.111 port 47008 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:33:30.339394 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:33:30.351020 systemd-logind[1934]: New session 4 of user core. Jan 23 23:33:30.356239 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:33:30.571447 sshd[2264]: Connection closed by 20.161.92.111 port 47008 Jan 23 23:33:30.572283 sshd-session[2260]: pam_unix(sshd:session): session closed for user core Jan 23 23:33:30.581386 systemd[1]: sshd@2-172.31.23.100:22-20.161.92.111:47008.service: Deactivated successfully. Jan 23 23:33:30.584988 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:33:30.587210 systemd-logind[1934]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:33:30.590426 systemd-logind[1934]: Removed session 4. Jan 23 23:33:30.680664 systemd[1]: Started sshd@3-172.31.23.100:22-20.161.92.111:47018.service - OpenSSH per-connection server daemon (20.161.92.111:47018). 
Jan 23 23:33:31.186966 sshd[2270]: Accepted publickey for core from 20.161.92.111 port 47018 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:33:31.189221 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:33:31.198023 systemd-logind[1934]: New session 5 of user core. Jan 23 23:33:31.207204 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:33:31.448890 sshd[2274]: Connection closed by 20.161.92.111 port 47018 Jan 23 23:33:31.449681 sshd-session[2270]: pam_unix(sshd:session): session closed for user core Jan 23 23:33:31.457377 systemd[1]: sshd@3-172.31.23.100:22-20.161.92.111:47018.service: Deactivated successfully. Jan 23 23:33:31.461592 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:33:31.466819 systemd-logind[1934]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:33:31.468755 systemd-logind[1934]: Removed session 5. Jan 23 23:33:31.538250 systemd[1]: Started sshd@4-172.31.23.100:22-20.161.92.111:47034.service - OpenSSH per-connection server daemon (20.161.92.111:47034). Jan 23 23:33:32.012972 sshd[2280]: Accepted publickey for core from 20.161.92.111 port 47034 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:33:32.015277 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:33:32.024490 systemd-logind[1934]: New session 6 of user core. Jan 23 23:33:32.032216 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:33:32.258503 sudo[2285]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:33:32.259184 sudo[2285]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:33:32.271335 sudo[2285]: pam_unix(sudo:session): session closed for user root Jan 23 23:33:32.350977 sshd[2284]: Connection closed by 20.161.92.111 port 47034 Jan 23 23:33:32.350586 sshd-session[2280]: pam_unix(sshd:session): session closed for user core Jan 23 23:33:32.360094 systemd[1]: sshd@4-172.31.23.100:22-20.161.92.111:47034.service: Deactivated successfully. Jan 23 23:33:32.363286 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:33:32.365325 systemd-logind[1934]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:33:32.368517 systemd-logind[1934]: Removed session 6. Jan 23 23:33:32.448203 systemd[1]: Started sshd@5-172.31.23.100:22-20.161.92.111:55564.service - OpenSSH per-connection server daemon (20.161.92.111:55564). Jan 23 23:33:32.912993 sshd[2292]: Accepted publickey for core from 20.161.92.111 port 55564 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:33:32.914944 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:33:32.923999 systemd-logind[1934]: New session 7 of user core. Jan 23 23:33:32.931200 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 23 23:33:33.077403 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:33:33.078577 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:33:33.083519 sudo[2298]: pam_unix(sudo:session): session closed for user root Jan 23 23:33:33.095787 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 23:33:33.096536 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:33:33.111696 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 23:33:33.177000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 23:33:33.179262 augenrules[2322]: No rules Jan 23 23:33:33.179620 kernel: kauditd_printk_skb: 136 callbacks suppressed Jan 23 23:33:33.179692 kernel: audit: type=1305 audit(1769211213.177:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 23:33:33.177000 audit[2322]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeba87ab0 a2=420 a3=0 items=0 ppid=2303 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:33.189545 kernel: audit: type=1300 audit(1769211213.177:230): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeba87ab0 a2=420 a3=0 items=0 ppid=2303 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:33.190188 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:33:33.190642 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 23:33:33.194834 kernel: audit: type=1327 audit(1769211213.177:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 23:33:33.177000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 23:33:33.195180 sudo[2297]: pam_unix(sudo:session): session closed for user root Jan 23 23:33:33.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.201277 kernel: audit: type=1130 audit(1769211213.190:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.205924 kernel: audit: type=1131 audit(1769211213.190:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:33:33.194000 audit[2297]: USER_END pid=2297 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.210933 kernel: audit: type=1106 audit(1769211213.194:233): pid=2297 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.194000 audit[2297]: CRED_DISP pid=2297 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.215501 kernel: audit: type=1104 audit(1769211213.194:234): pid=2297 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.273046 sshd[2296]: Connection closed by 20.161.92.111 port 55564 Jan 23 23:33:33.272237 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Jan 23 23:33:33.273000 audit[2292]: USER_END pid=2292 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:33:33.279816 systemd[1]: sshd@5-172.31.23.100:22-20.161.92.111:55564.service: Deactivated successfully. Jan 23 23:33:33.288169 kernel: audit: type=1106 audit(1769211213.273:235): pid=2292 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:33:33.288309 kernel: audit: type=1104 audit(1769211213.273:236): pid=2292 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:33:33.273000 audit[2292]: CRED_DISP pid=2292 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:33:33.283864 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:33:33.288016 systemd-logind[1934]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:33:33.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.23.100:22-20.161.92.111:55564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.294813 kernel: audit: type=1131 audit(1769211213.279:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.23.100:22-20.161.92.111:55564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.293230 systemd-logind[1934]: Removed session 7. 
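The audit records in this log carry the executed command line as a hex-encoded, NUL-separated argv in the PROCTITLE field. Decoding the value from the type=1327 record above gives "/sbin/auditctl -R /etc/audit/audit.rules", i.e. the auditctl run triggered by the audit-rules restart. A small sketch of the decoding, with the sample string copied verbatim from that record:

# Decode the hex-encoded proctitle field used by the audit records in this
# log: the kernel stores the command line as argv joined with NUL bytes.
def decode_proctitle(hexstr: str) -> str:
    argv = bytes.fromhex(hexstr).split(b"\x00")
    return " ".join(arg.decode() for arg in argv if arg)

# Copied from the "audit: PROCTITLE" record above.
sample = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
print(decode_proctitle(sample))  # -> /sbin/auditctl -R /etc/audit/audit.rules

The same decoding applies to the NETFILTER_CFG records further down: the first one logged during the docker startup decodes to "/usr/bin/iptables --wait -t nat -N DOCKER", and the rest show Docker creating its DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains for both iptables and ip6tables.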
Jan 23 23:33:33.367809 systemd[1]: Started sshd@6-172.31.23.100:22-20.161.92.111:55572.service - OpenSSH per-connection server daemon (20.161.92.111:55572). Jan 23 23:33:33.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.23.100:22-20.161.92.111:55572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.821000 audit[2331]: USER_ACCT pid=2331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:33:33.822452 sshd[2331]: Accepted publickey for core from 20.161.92.111 port 55572 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:33:33.822000 audit[2331]: CRED_ACQ pid=2331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:33:33.823000 audit[2331]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed0491a0 a2=3 a3=0 items=0 ppid=1 pid=2331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:33.823000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:33:33.824904 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:33:33.834045 systemd-logind[1934]: New session 8 of user core. Jan 23 23:33:33.847219 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:33:33.851000 audit[2331]: USER_START pid=2331 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:33:33.855000 audit[2335]: CRED_ACQ pid=2335 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:33:33.986000 audit[2336]: USER_ACCT pid=2336 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.987799 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:33:33.988494 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:33:33.987000 audit[2336]: CRED_REFR pid=2336 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 23:33:33.987000 audit[2336]: USER_START pid=2336 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 23:33:35.215425 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:33:35.235385 (dockerd)[2356]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:33:36.313028 dockerd[2356]: time="2026-01-23T23:33:36.312942930Z" level=info msg="Starting up" Jan 23 23:33:36.317673 dockerd[2356]: time="2026-01-23T23:33:36.316813326Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 23:33:36.336966 dockerd[2356]: time="2026-01-23T23:33:36.336861078Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 23:33:36.414461 dockerd[2356]: time="2026-01-23T23:33:36.414254695Z" level=info msg="Loading containers: start." Jan 23 23:33:36.430962 kernel: Initializing XFRM netlink socket Jan 23 23:33:36.581000 audit[2405]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.581000 audit[2405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffea0f3700 a2=0 a3=0 items=0 ppid=2356 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.581000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 23:33:36.586000 audit[2407]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.586000 audit[2407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffca1c4190 a2=0 a3=0 items=0 ppid=2356 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 23:33:36.590000 audit[2409]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.590000 audit[2409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf8e40a0 a2=0 a3=0 items=0 ppid=2356 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 23:33:36.594000 audit[2411]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.594000 audit[2411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdfc2d280 a2=0 a3=0 items=0 ppid=2356 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.594000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 23:33:36.598000 audit[2413]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.598000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc80584f0 a2=0 a3=0 items=0 ppid=2356 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 23:33:36.602000 audit[2415]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.602000 audit[2415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffef53e950 a2=0 a3=0 items=0 ppid=2356 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.602000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 23:33:36.606000 audit[2417]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.606000 audit[2417]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdf110600 a2=0 a3=0 items=0 ppid=2356 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.606000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 23:33:36.611000 audit[2419]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.611000 audit[2419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd6116d60 a2=0 a3=0 items=0 ppid=2356 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 23:33:36.650000 audit[2422]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.650000 audit[2422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffc711cf90 a2=0 a3=0 items=0 ppid=2356 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.650000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 23:33:36.654000 audit[2424]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.654000 audit[2424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc3d58ab0 a2=0 a3=0 items=0 ppid=2356 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.654000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 23:33:36.658000 audit[2426]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.658000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffce5e5b20 a2=0 a3=0 items=0 ppid=2356 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.658000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 23:33:36.662000 audit[2428]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.662000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc8eb4470 a2=0 a3=0 items=0 ppid=2356 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.662000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 23:33:36.667000 audit[2430]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.667000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffdb85b700 a2=0 a3=0 items=0 ppid=2356 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 23:33:36.804000 audit[2460]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.804000 audit[2460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffa17b260 a2=0 a3=0 items=0 ppid=2356 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.804000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 23:33:36.808000 audit[2462]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.808000 audit[2462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe62dd490 a2=0 a3=0 items=0 ppid=2356 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 23:33:36.812000 audit[2464]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.812000 audit[2464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeefadf30 a2=0 a3=0 items=0 ppid=2356 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 23:33:36.817000 audit[2466]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.817000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5bf3760 a2=0 a3=0 items=0 ppid=2356 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 23:33:36.821000 audit[2468]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.821000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffed832b40 a2=0 a3=0 items=0 ppid=2356 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 23:33:36.825000 audit[2470]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.825000 audit[2470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd4943020 a2=0 a3=0 items=0 ppid=2356 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 23:33:36.830000 audit[2472]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2472 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.830000 audit[2472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff33b16a0 a2=0 a3=0 items=0 ppid=2356 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.830000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 23:33:36.834000 audit[2474]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.834000 audit[2474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd7279f00 a2=0 a3=0 items=0 ppid=2356 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 23:33:36.840000 audit[2476]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.840000 audit[2476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff9508f80 a2=0 a3=0 items=0 ppid=2356 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.840000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 23:33:36.845000 audit[2478]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.845000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe0d423d0 a2=0 a3=0 items=0 ppid=2356 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 23:33:36.849000 audit[2480]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.849000 audit[2480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff9514050 a2=0 a3=0 items=0 ppid=2356 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 23:33:36.853000 audit[2482]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2482 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.853000 audit[2482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd01876b0 a2=0 a3=0 items=0 ppid=2356 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.853000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 23:33:36.858000 audit[2484]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.858000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffa9b22b0 a2=0 a3=0 items=0 ppid=2356 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.858000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 23:33:36.870000 audit[2489]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.870000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffda45080 a2=0 a3=0 items=0 ppid=2356 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.870000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 23:33:36.874000 audit[2491]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.874000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcf0d6d30 a2=0 a3=0 items=0 ppid=2356 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 23:33:36.878000 audit[2493]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.878000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe402c160 a2=0 a3=0 items=0 ppid=2356 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.878000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 23:33:36.882000 audit[2495]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.882000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffce2be1a0 a2=0 a3=0 items=0 ppid=2356 pid=2495 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.882000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 23:33:36.887000 audit[2497]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.887000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc186a930 a2=0 a3=0 items=0 ppid=2356 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.887000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 23:33:36.891000 audit[2499]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:33:36.891000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcd57f680 a2=0 a3=0 items=0 ppid=2356 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 23:33:36.913199 (udev-worker)[2378]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:33:36.995000 audit[2503]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:36.995000 audit[2503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffea2a7ad0 a2=0 a3=0 items=0 ppid=2356 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:36.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 23:33:37.000000 audit[2505]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:37.000000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc74a7420 a2=0 a3=0 items=0 ppid=2356 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:37.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 23:33:37.017000 audit[2513]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:37.017000 audit[2513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc682cae0 a2=0 a3=0 items=0 ppid=2356 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:37.017000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 23:33:37.036000 audit[2519]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:37.036000 audit[2519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff9e6da00 a2=0 a3=0 items=0 ppid=2356 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:37.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 23:33:37.041000 audit[2521]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:37.041000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff8359e20 a2=0 a3=0 items=0 ppid=2356 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:37.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 23:33:37.045000 audit[2523]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:37.045000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc1922200 a2=0 a3=0 items=0 ppid=2356 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:37.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 23:33:37.049000 audit[2525]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:37.049000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd5360550 a2=0 a3=0 items=0 ppid=2356 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:37.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 23:33:37.054000 audit[2527]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:33:37.054000 audit[2527]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=312 a0=3 a1=ffffce68e540 a2=0 a3=0 items=0 ppid=2356 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:33:37.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 23:33:37.056729 systemd-networkd[1781]: docker0: Link UP Jan 23 23:33:37.067535 dockerd[2356]: time="2026-01-23T23:33:37.067463298Z" level=info msg="Loading containers: done." Jan 23 23:33:37.093034 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2298842514-merged.mount: Deactivated successfully. Jan 23 23:33:37.097053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:33:37.101275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:33:37.128093 dockerd[2356]: time="2026-01-23T23:33:37.128022846Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:33:37.128309 dockerd[2356]: time="2026-01-23T23:33:37.128154102Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 23:33:37.128488 dockerd[2356]: time="2026-01-23T23:33:37.128442918Z" level=info msg="Initializing buildkit" Jan 23 23:33:37.186157 dockerd[2356]: time="2026-01-23T23:33:37.186085927Z" level=info msg="Completed buildkit initialization" Jan 23 23:33:37.207570 dockerd[2356]: time="2026-01-23T23:33:37.207171955Z" level=info msg="Daemon has completed initialization" Jan 23 23:33:37.207676 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:33:37.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:37.208796 dockerd[2356]: time="2026-01-23T23:33:37.208526311Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:33:37.535335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:33:37.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:37.546405 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:33:37.619953 kubelet[2574]: E0123 23:33:37.619834 2574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:33:37.631787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:33:37.632130 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
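
A note on reading the audit records above: the auditd PROCTITLE field carries the full command line of the process, hex-encoded, with NUL bytes ("00") separating the argv elements. The small sketch below (not part of the captured log) decodes one of the proctitle strings from the iptables records above; the helper name decode_proctitle is just an illustration.

# Sketch: decode an audit PROCTITLE hex string back into a command line.
def decode_proctitle(hex_string: str) -> str:
    raw = bytes.fromhex(hex_string)
    # argv elements are NUL-separated; skip empty trailing fields
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

# First PROCTITLE from the records above:
sample = ("2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100"
          "444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054")
print(decode_proctitle(sample))
# -> /usr/bin/iptables --wait -t filter -A DOCKER-FORWARD -i docker0 -j ACCEPT
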
Jan 23 23:33:37.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 23:33:37.633812 systemd[1]: kubelet.service: Consumed 322ms CPU time, 105.2M memory peak. Jan 23 23:33:38.513397 containerd[1972]: time="2026-01-23T23:33:38.513341073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 23:33:39.383781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182794003.mount: Deactivated successfully. Jan 23 23:33:40.619608 containerd[1972]: time="2026-01-23T23:33:40.618978036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:40.620898 containerd[1972]: time="2026-01-23T23:33:40.620807544Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Jan 23 23:33:40.622954 containerd[1972]: time="2026-01-23T23:33:40.622324860Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:40.627329 containerd[1972]: time="2026-01-23T23:33:40.627244872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:40.629730 containerd[1972]: time="2026-01-23T23:33:40.629502576Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.116095503s" Jan 23 23:33:40.629730 containerd[1972]: time="2026-01-23T23:33:40.629556552Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 23 23:33:40.632702 containerd[1972]: time="2026-01-23T23:33:40.632651388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 23:33:42.051682 containerd[1972]: time="2026-01-23T23:33:42.051608303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:42.053941 containerd[1972]: time="2026-01-23T23:33:42.053829431Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Jan 23 23:33:42.055149 containerd[1972]: time="2026-01-23T23:33:42.055107239Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:42.062297 containerd[1972]: time="2026-01-23T23:33:42.062227307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:42.064273 containerd[1972]: time="2026-01-23T23:33:42.064226987Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" 
with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.431518995s" Jan 23 23:33:42.064430 containerd[1972]: time="2026-01-23T23:33:42.064402787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 23 23:33:42.065164 containerd[1972]: time="2026-01-23T23:33:42.065025347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 23:33:43.256948 containerd[1972]: time="2026-01-23T23:33:43.256774345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:43.259851 containerd[1972]: time="2026-01-23T23:33:43.259779913Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=0" Jan 23 23:33:43.261066 containerd[1972]: time="2026-01-23T23:33:43.260994361Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:43.265962 containerd[1972]: time="2026-01-23T23:33:43.265672537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:43.267899 containerd[1972]: time="2026-01-23T23:33:43.267667609Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.20231855s" Jan 23 23:33:43.267899 containerd[1972]: time="2026-01-23T23:33:43.267723097Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 23 23:33:43.268390 containerd[1972]: time="2026-01-23T23:33:43.268277953Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 23:33:44.761033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104277449.mount: Deactivated successfully. 
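
As an aside on the pull entries above: each one reports a "bytes read" figure for the transfer, an unpacked image size, and a wall-clock duration. A rough back-of-the-envelope rate can be computed from the first two pulls, assuming "bytes read" is the compressed transfer size and the quoted duration is the full pull time (both assumptions, not stated by containerd here).

# Illustration only: approximate over-the-wire rate for the two pulls above.
pulls = {
    "kube-apiserver:v1.33.7":          (25_791_094, 2.116095503),
    "kube-controller-manager:v1.33.7": (23_544_927, 1.431518995),
}
for image, (bytes_read, seconds) in pulls.items():
    print(f"{image}: ~{bytes_read / seconds / 1e6:.1f} MB/s")
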
Jan 23 23:33:45.345690 containerd[1972]: time="2026-01-23T23:33:45.345633891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:45.348276 containerd[1972]: time="2026-01-23T23:33:45.348204927Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=9841285" Jan 23 23:33:45.349941 containerd[1972]: time="2026-01-23T23:33:45.349357443Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:45.353330 containerd[1972]: time="2026-01-23T23:33:45.353266527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:45.354653 containerd[1972]: time="2026-01-23T23:33:45.354606975Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 2.08627189s" Jan 23 23:33:45.354789 containerd[1972]: time="2026-01-23T23:33:45.354760731Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 23 23:33:45.356105 containerd[1972]: time="2026-01-23T23:33:45.356002143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 23:33:45.872638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976209225.mount: Deactivated successfully. 
Jan 23 23:33:46.989949 containerd[1972]: time="2026-01-23T23:33:46.989696539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:46.993162 containerd[1972]: time="2026-01-23T23:33:46.993073471Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=0" Jan 23 23:33:46.996931 containerd[1972]: time="2026-01-23T23:33:46.995365915Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:47.002673 containerd[1972]: time="2026-01-23T23:33:47.002612223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:47.004768 containerd[1972]: time="2026-01-23T23:33:47.004722075Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.648431464s" Jan 23 23:33:47.004942 containerd[1972]: time="2026-01-23T23:33:47.004896747Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 23 23:33:47.005648 containerd[1972]: time="2026-01-23T23:33:47.005613735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:33:47.482631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372343325.mount: Deactivated successfully. 
Jan 23 23:33:47.497345 containerd[1972]: time="2026-01-23T23:33:47.497293446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:33:47.499396 containerd[1972]: time="2026-01-23T23:33:47.499339566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 23:33:47.501739 containerd[1972]: time="2026-01-23T23:33:47.501699222Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:33:47.507788 containerd[1972]: time="2026-01-23T23:33:47.507720438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:33:47.509287 containerd[1972]: time="2026-01-23T23:33:47.509239038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 503.399751ms" Jan 23 23:33:47.509452 containerd[1972]: time="2026-01-23T23:33:47.509422398Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:33:47.510244 containerd[1972]: time="2026-01-23T23:33:47.510178338Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 23:33:47.882771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:33:47.885353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:33:48.133854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232150918.mount: Deactivated successfully. Jan 23 23:33:48.393544 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 23 23:33:48.393697 kernel: audit: type=1130 audit(1769211228.385:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:48.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:48.387440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:33:48.410523 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:33:48.538949 kubelet[2733]: E0123 23:33:48.538575 2733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:33:48.546842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:33:48.547212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:33:48.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 23:33:48.555313 systemd[1]: kubelet.service: Consumed 353ms CPU time, 107M memory peak. Jan 23 23:33:48.558031 kernel: audit: type=1131 audit(1769211228.548:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 23:33:50.250013 containerd[1972]: time="2026-01-23T23:33:50.249958592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:50.254623 containerd[1972]: time="2026-01-23T23:33:50.254550344Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Jan 23 23:33:50.256559 containerd[1972]: time="2026-01-23T23:33:50.256494056Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:50.268936 containerd[1972]: time="2026-01-23T23:33:50.268825772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:33:50.274557 containerd[1972]: time="2026-01-23T23:33:50.274110800Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.763859478s" Jan 23 23:33:50.274557 containerd[1972]: time="2026-01-23T23:33:50.274176440Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 23 23:33:51.710403 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 23:33:51.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:33:51.721990 kernel: audit: type=1131 audit(1769211231.710:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:33:51.727000 audit: BPF prog-id=62 op=UNLOAD Jan 23 23:33:51.730946 kernel: audit: type=1334 audit(1769211231.727:293): prog-id=62 op=UNLOAD Jan 23 23:33:58.678518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 23:33:58.681089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:33:58.793874 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:33:58.794098 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:33:58.796080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:33:58.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 23:33:58.804729 kernel: audit: type=1130 audit(1769211238.794:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 23:33:58.812308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:33:58.855277 systemd[1]: Reload requested from client PID 2818 ('systemctl') (unit session-8.scope)... Jan 23 23:33:58.855319 systemd[1]: Reloading... Jan 23 23:33:59.122987 zram_generator::config[2868]: No configuration found. Jan 23 23:33:59.619277 systemd[1]: Reloading finished in 763 ms. Jan 23 23:33:59.661000 audit: BPF prog-id=66 op=LOAD Jan 23 23:33:59.665000 audit: BPF prog-id=54 op=UNLOAD Jan 23 23:33:59.670500 kernel: audit: type=1334 audit(1769211239.661:295): prog-id=66 op=LOAD Jan 23 23:33:59.670615 kernel: audit: type=1334 audit(1769211239.665:296): prog-id=54 op=UNLOAD Jan 23 23:33:59.665000 audit: BPF prog-id=67 op=LOAD Jan 23 23:33:59.674545 kernel: audit: type=1334 audit(1769211239.665:297): prog-id=67 op=LOAD Jan 23 23:33:59.665000 audit: BPF prog-id=68 op=LOAD Jan 23 23:33:59.676875 kernel: audit: type=1334 audit(1769211239.665:298): prog-id=68 op=LOAD Jan 23 23:33:59.665000 audit: BPF prog-id=55 op=UNLOAD Jan 23 23:33:59.682143 kernel: audit: type=1334 audit(1769211239.665:299): prog-id=55 op=UNLOAD Jan 23 23:33:59.665000 audit: BPF prog-id=56 op=UNLOAD Jan 23 23:33:59.685467 kernel: audit: type=1334 audit(1769211239.665:300): prog-id=56 op=UNLOAD Jan 23 23:33:59.685609 kernel: audit: type=1334 audit(1769211239.670:301): prog-id=69 op=LOAD Jan 23 23:33:59.670000 audit: BPF prog-id=69 op=LOAD Jan 23 23:33:59.670000 audit: BPF prog-id=59 op=UNLOAD Jan 23 23:33:59.690514 kernel: audit: type=1334 audit(1769211239.670:302): prog-id=59 op=UNLOAD Jan 23 23:33:59.673000 audit: BPF prog-id=70 op=LOAD Jan 23 23:33:59.693514 kernel: audit: type=1334 audit(1769211239.673:303): prog-id=70 op=LOAD Jan 23 23:33:59.675000 audit: BPF prog-id=71 op=LOAD Jan 23 23:33:59.676000 audit: BPF prog-id=60 op=UNLOAD Jan 23 23:33:59.676000 audit: BPF prog-id=61 op=UNLOAD Jan 23 23:33:59.681000 audit: BPF prog-id=72 op=LOAD Jan 23 23:33:59.681000 audit: BPF prog-id=46 op=UNLOAD Jan 23 23:33:59.681000 audit: BPF prog-id=73 op=LOAD Jan 23 23:33:59.681000 audit: BPF prog-id=74 op=LOAD Jan 23 23:33:59.681000 audit: BPF prog-id=47 op=UNLOAD Jan 23 23:33:59.681000 audit: BPF prog-id=48 op=UNLOAD Jan 23 23:33:59.686000 audit: BPF prog-id=75 op=LOAD Jan 23 23:33:59.686000 audit: BPF prog-id=43 op=UNLOAD Jan 23 23:33:59.686000 audit: BPF prog-id=76 op=LOAD Jan 23 23:33:59.686000 audit: 
BPF prog-id=77 op=LOAD Jan 23 23:33:59.686000 audit: BPF prog-id=44 op=UNLOAD Jan 23 23:33:59.686000 audit: BPF prog-id=45 op=UNLOAD Jan 23 23:33:59.689000 audit: BPF prog-id=78 op=LOAD Jan 23 23:33:59.689000 audit: BPF prog-id=51 op=UNLOAD Jan 23 23:33:59.689000 audit: BPF prog-id=79 op=LOAD Jan 23 23:33:59.689000 audit: BPF prog-id=80 op=LOAD Jan 23 23:33:59.689000 audit: BPF prog-id=52 op=UNLOAD Jan 23 23:33:59.689000 audit: BPF prog-id=53 op=UNLOAD Jan 23 23:33:59.693000 audit: BPF prog-id=81 op=LOAD Jan 23 23:33:59.693000 audit: BPF prog-id=57 op=UNLOAD Jan 23 23:33:59.697000 audit: BPF prog-id=82 op=LOAD Jan 23 23:33:59.697000 audit: BPF prog-id=58 op=UNLOAD Jan 23 23:33:59.700000 audit: BPF prog-id=83 op=LOAD Jan 23 23:33:59.700000 audit: BPF prog-id=65 op=UNLOAD Jan 23 23:33:59.701000 audit: BPF prog-id=84 op=LOAD Jan 23 23:33:59.701000 audit: BPF prog-id=85 op=LOAD Jan 23 23:33:59.701000 audit: BPF prog-id=49 op=UNLOAD Jan 23 23:33:59.701000 audit: BPF prog-id=50 op=UNLOAD Jan 23 23:33:59.729820 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:33:59.730884 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:33:59.732999 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:33:59.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 23:33:59.733120 systemd[1]: kubelet.service: Consumed 233ms CPU time, 95.1M memory peak. Jan 23 23:33:59.736843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:34:00.102954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:34:00.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:34:00.120407 (kubelet)[2929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:34:00.191375 kubelet[2929]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:34:00.191375 kubelet[2929]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:34:00.191375 kubelet[2929]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:34:00.191948 kubelet[2929]: I0123 23:34:00.191474 2929 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:34:00.997976 kubelet[2929]: I0123 23:34:00.997162 2929 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:34:00.997976 kubelet[2929]: I0123 23:34:00.997223 2929 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:34:00.998521 kubelet[2929]: I0123 23:34:00.998476 2929 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:34:01.057220 kubelet[2929]: E0123 23:34:01.057157 2929 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.100:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:34:01.064378 kubelet[2929]: I0123 23:34:01.064321 2929 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:34:01.080765 kubelet[2929]: I0123 23:34:01.080733 2929 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 23:34:01.088294 kubelet[2929]: I0123 23:34:01.088257 2929 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 23:34:01.094459 kubelet[2929]: I0123 23:34:01.094412 2929 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:34:01.094848 kubelet[2929]: I0123 23:34:01.094595 2929 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:34:01.095269 kubelet[2929]: I0123 23:34:01.095245 2929 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:34:01.095371 kubelet[2929]: I0123 
23:34:01.095353 2929 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:34:01.095766 kubelet[2929]: I0123 23:34:01.095747 2929 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:34:01.103340 kubelet[2929]: I0123 23:34:01.103304 2929 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:34:01.103585 kubelet[2929]: I0123 23:34:01.103470 2929 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:34:01.106854 kubelet[2929]: I0123 23:34:01.106823 2929 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:34:01.108946 kubelet[2929]: I0123 23:34:01.107037 2929 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:34:01.112041 kubelet[2929]: E0123 23:34:01.111968 2929 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-100&limit=500&resourceVersion=0\": dial tcp 172.31.23.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:34:01.112797 kubelet[2929]: E0123 23:34:01.112734 2929 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:34:01.113443 kubelet[2929]: I0123 23:34:01.113351 2929 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 23:34:01.114667 kubelet[2929]: I0123 23:34:01.114628 2929 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:34:01.114887 kubelet[2929]: W0123 23:34:01.114857 2929 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
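
The container_manager_linux nodeConfig dump a few lines above is printed as plain JSON, so it can be re-parsed to inspect the hard eviction thresholds the kubelet will enforce. A minimal sketch, assuming only that the dumped field stays valid JSON; the excerpt below reproduces two of the thresholds from that dump.

import json

excerpt = """
{"CgroupDriver":"systemd","CgroupRoot":"/",
 "HardEvictionThresholds":[
  {"Signal":"memory.available","Operator":"LessThan",
   "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
  {"Signal":"nodefs.available","Operator":"LessThan",
   "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]}
"""
for t in json.loads(excerpt)["HardEvictionThresholds"]:
    v = t["Value"]
    # prints e.g. "memory.available LessThan 100Mi" and "nodefs.available LessThan 10%"
    print(t["Signal"], t["Operator"], v["Quantity"] or f"{v['Percentage']:.0%}")
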
Jan 23 23:34:01.121543 kubelet[2929]: I0123 23:34:01.121499 2929 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:34:01.121674 kubelet[2929]: I0123 23:34:01.121567 2929 server.go:1289] "Started kubelet" Jan 23 23:34:01.137516 kubelet[2929]: I0123 23:34:01.137422 2929 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:34:01.138987 kubelet[2929]: E0123 23:34:01.135504 2929 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.100:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.100:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-100.188d804007ad3fe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-100,UID:ip-172-31-23-100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-100,},FirstTimestamp:2026-01-23 23:34:01.121529826 +0000 UTC m=+0.993702018,LastTimestamp:2026-01-23 23:34:01.121529826 +0000 UTC m=+0.993702018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-100,}" Jan 23 23:34:01.147166 kubelet[2929]: I0123 23:34:01.142245 2929 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:34:01.148849 kubelet[2929]: I0123 23:34:01.142370 2929 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:34:01.149259 kubelet[2929]: I0123 23:34:01.142333 2929 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:34:01.150222 kubelet[2929]: I0123 23:34:01.143177 2929 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:34:01.151936 kubelet[2929]: I0123 23:34:01.150383 2929 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:34:01.151936 kubelet[2929]: E0123 23:34:01.151268 2929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-100\" not found" Jan 23 23:34:01.152258 kubelet[2929]: I0123 23:34:01.152212 2929 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:34:01.152354 kubelet[2929]: I0123 23:34:01.152324 2929 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:34:01.154648 kubelet[2929]: I0123 23:34:01.154601 2929 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:34:01.155290 kubelet[2929]: E0123 23:34:01.155251 2929 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:34:01.155608 kubelet[2929]: E0123 23:34:01.155569 2929 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-100?timeout=10s\": dial tcp 172.31.23.100:6443: connect: connection refused" interval="200ms" Jan 23 23:34:01.157901 kubelet[2929]: I0123 23:34:01.157858 2929 server.go:255] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:34:01.158844 kubelet[2929]: I0123 23:34:01.158811 2929 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:34:01.159033 kubelet[2929]: I0123 23:34:01.159014 2929 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:34:01.162000 audit[2944]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:01.162000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe70c0f30 a2=0 a3=0 items=0 ppid=2929 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 23:34:01.169000 audit[2945]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:01.169000 audit[2945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbfe0ea0 a2=0 a3=0 items=0 ppid=2929 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 23:34:01.186285 kubelet[2929]: E0123 23:34:01.186235 2929 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:34:01.187000 audit[2952]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:01.187000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe6a72ca0 a2=0 a3=0 items=0 ppid=2929 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 23:34:01.194000 audit[2954]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:01.194000 audit[2954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc7546aa0 a2=0 a3=0 items=0 ppid=2929 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 23:34:01.200484 kubelet[2929]: I0123 23:34:01.200386 2929 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:34:01.201418 kubelet[2929]: I0123 23:34:01.201083 2929 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:34:01.201418 kubelet[2929]: I0123 23:34:01.201128 2929 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:34:01.206964 kubelet[2929]: I0123 23:34:01.206927 2929 policy_none.go:49] "None policy: Start" Jan 23 23:34:01.207165 kubelet[2929]: I0123 23:34:01.207144 2929 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:34:01.207278 kubelet[2929]: I0123 23:34:01.207260 2929 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:34:01.213000 audit[2957]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:01.213000 audit[2957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffe405c40 a2=0 a3=0 items=0 ppid=2929 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 23 23:34:01.216518 kubelet[2929]: I0123 23:34:01.216470 2929 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 23 23:34:01.216000 audit[2960]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:01.216000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff7e31840 a2=0 a3=0 items=0 ppid=2929 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 23:34:01.220638 kubelet[2929]: I0123 23:34:01.220596 2929 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 23:34:01.220831 kubelet[2929]: I0123 23:34:01.220760 2929 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 23:34:01.220831 kubelet[2929]: I0123 23:34:01.220799 2929 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:34:01.221404 kubelet[2929]: I0123 23:34:01.221236 2929 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 23:34:01.221562 kubelet[2929]: E0123 23:34:01.221503 2929 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:34:01.224070 kubelet[2929]: E0123 23:34:01.223875 2929 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:34:01.223000 audit[2959]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:01.223000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff55510a0 a2=0 a3=0 items=0 ppid=2929 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 23:34:01.223000 audit[2961]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:01.223000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd513d250 a2=0 a3=0 items=0 ppid=2929 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 23:34:01.227000 audit[2962]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:01.227000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7e3cdd0 a2=0 a3=0 items=0 ppid=2929 
pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 23:34:01.232053 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 23:34:01.233000 audit[2963]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:01.233000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb7bce30 a2=0 a3=0 items=0 ppid=2929 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.233000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 23:34:01.234000 audit[2964]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:01.234000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb70e6a0 a2=0 a3=0 items=0 ppid=2929 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 23:34:01.236000 audit[2965]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:01.236000 audit[2965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2de5f50 a2=0 a3=0 items=0 ppid=2929 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 23:34:01.248849 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 23:34:01.253005 kubelet[2929]: E0123 23:34:01.251404 2929 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-100\" not found" Jan 23 23:34:01.258193 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 23:34:01.270578 kubelet[2929]: E0123 23:34:01.270542 2929 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:34:01.271025 kubelet[2929]: I0123 23:34:01.271001 2929 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:34:01.271692 kubelet[2929]: I0123 23:34:01.271640 2929 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:34:01.272428 kubelet[2929]: I0123 23:34:01.272292 2929 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:34:01.276205 kubelet[2929]: E0123 23:34:01.276108 2929 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:34:01.276205 kubelet[2929]: E0123 23:34:01.276175 2929 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-100\" not found" Jan 23 23:34:01.347965 systemd[1]: Created slice kubepods-burstable-podeb66f559761c65e149209f90bbf61959.slice - libcontainer container kubepods-burstable-podeb66f559761c65e149209f90bbf61959.slice. Jan 23 23:34:01.353102 kubelet[2929]: I0123 23:34:01.353060 2929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37115ed1ce6aba6cc45b19a5774c39f6-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-100\" (UID: \"37115ed1ce6aba6cc45b19a5774c39f6\") " pod="kube-system/kube-scheduler-ip-172-31-23-100" Jan 23 23:34:01.353339 kubelet[2929]: I0123 23:34:01.353312 2929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb66f559761c65e149209f90bbf61959-ca-certs\") pod \"kube-apiserver-ip-172-31-23-100\" (UID: \"eb66f559761c65e149209f90bbf61959\") " pod="kube-system/kube-apiserver-ip-172-31-23-100" Jan 23 23:34:01.353490 kubelet[2929]: I0123 23:34:01.353464 2929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb66f559761c65e149209f90bbf61959-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-100\" (UID: \"eb66f559761c65e149209f90bbf61959\") " pod="kube-system/kube-apiserver-ip-172-31-23-100" Jan 23 23:34:01.353629 kubelet[2929]: I0123 23:34:01.353604 2929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb66f559761c65e149209f90bbf61959-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-100\" (UID: \"eb66f559761c65e149209f90bbf61959\") " pod="kube-system/kube-apiserver-ip-172-31-23-100" Jan 23 23:34:01.353780 kubelet[2929]: I0123 23:34:01.353757 2929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:01.353936 kubelet[2929]: I0123 23:34:01.353895 2929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:01.354077 kubelet[2929]: I0123 23:34:01.354041 2929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:01.354234 kubelet[2929]: I0123 23:34:01.354209 2929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:01.358843 kubelet[2929]: E0123 23:34:01.358415 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:01.359160 kubelet[2929]: E0123 23:34:01.359114 2929 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-100?timeout=10s\": dial tcp 172.31.23.100:6443: connect: connection refused" interval="400ms" Jan 23 23:34:01.363263 systemd[1]: Created slice kubepods-burstable-podba833457ef79647ade377797320728a6.slice - libcontainer container kubepods-burstable-podba833457ef79647ade377797320728a6.slice. Jan 23 23:34:01.368969 kubelet[2929]: E0123 23:34:01.368892 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:01.375407 systemd[1]: Created slice kubepods-burstable-pod37115ed1ce6aba6cc45b19a5774c39f6.slice - libcontainer container kubepods-burstable-pod37115ed1ce6aba6cc45b19a5774c39f6.slice. 
Jan 23 23:34:01.380688 kubelet[2929]: I0123 23:34:01.380003 2929 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-100" Jan 23 23:34:01.380688 kubelet[2929]: E0123 23:34:01.380334 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:01.380688 kubelet[2929]: E0123 23:34:01.380639 2929 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.100:6443/api/v1/nodes\": dial tcp 172.31.23.100:6443: connect: connection refused" node="ip-172-31-23-100" Jan 23 23:34:01.455668 kubelet[2929]: I0123 23:34:01.455611 2929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:01.583387 kubelet[2929]: I0123 23:34:01.583250 2929 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-100" Jan 23 23:34:01.583749 kubelet[2929]: E0123 23:34:01.583693 2929 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.100:6443/api/v1/nodes\": dial tcp 172.31.23.100:6443: connect: connection refused" node="ip-172-31-23-100" Jan 23 23:34:01.659925 containerd[1972]: time="2026-01-23T23:34:01.659814776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-100,Uid:eb66f559761c65e149209f90bbf61959,Namespace:kube-system,Attempt:0,}" Jan 23 23:34:01.670983 containerd[1972]: time="2026-01-23T23:34:01.670774052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-100,Uid:ba833457ef79647ade377797320728a6,Namespace:kube-system,Attempt:0,}" Jan 23 23:34:01.698506 containerd[1972]: time="2026-01-23T23:34:01.698418788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-100,Uid:37115ed1ce6aba6cc45b19a5774c39f6,Namespace:kube-system,Attempt:0,}" Jan 23 23:34:01.709337 containerd[1972]: time="2026-01-23T23:34:01.709197080Z" level=info msg="connecting to shim ff8c31c78b68b1d7b1a837c8022fd7145911974a242bfd99a227fad63a6f4722" address="unix:///run/containerd/s/56d33261e0c129343f0262091ddd3e76b4388b2a4564830c482f4aa83cdf4258" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:34:01.734944 containerd[1972]: time="2026-01-23T23:34:01.729962325Z" level=info msg="connecting to shim 6796e0df6930277bc24b6fac183683dda84ea81e0514e8aa87a907d83ddadeb0" address="unix:///run/containerd/s/94ade9ea2a97b0905768cafe7900b607b4fbe73dcf3ec5273f3537b8cd9d702d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:34:01.760609 kubelet[2929]: E0123 23:34:01.760527 2929 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-100?timeout=10s\": dial tcp 172.31.23.100:6443: connect: connection refused" interval="800ms" Jan 23 23:34:01.760735 containerd[1972]: time="2026-01-23T23:34:01.760533909Z" level=info msg="connecting to shim 5e1453e964178b4d0ced8b5d6ce2d54fe9cd24f0a9435afa4946fbf7aa0fc26f" address="unix:///run/containerd/s/02ff2167aa972a46f17e220f8bb49bcf883e33f9e5fefbe87a7883e523bd8e24" namespace=k8s.io protocol=ttrpc version=3 Jan 23 
23:34:01.810317 systemd[1]: Started cri-containerd-ff8c31c78b68b1d7b1a837c8022fd7145911974a242bfd99a227fad63a6f4722.scope - libcontainer container ff8c31c78b68b1d7b1a837c8022fd7145911974a242bfd99a227fad63a6f4722. Jan 23 23:34:01.849231 systemd[1]: Started cri-containerd-5e1453e964178b4d0ced8b5d6ce2d54fe9cd24f0a9435afa4946fbf7aa0fc26f.scope - libcontainer container 5e1453e964178b4d0ced8b5d6ce2d54fe9cd24f0a9435afa4946fbf7aa0fc26f. Jan 23 23:34:01.854329 systemd[1]: Started cri-containerd-6796e0df6930277bc24b6fac183683dda84ea81e0514e8aa87a907d83ddadeb0.scope - libcontainer container 6796e0df6930277bc24b6fac183683dda84ea81e0514e8aa87a907d83ddadeb0. Jan 23 23:34:01.867000 audit: BPF prog-id=86 op=LOAD Jan 23 23:34:01.871000 audit: BPF prog-id=87 op=LOAD Jan 23 23:34:01.871000 audit[2994]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2974 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666386333316337386236386231643762316138333763383032326664 Jan 23 23:34:01.871000 audit: BPF prog-id=87 op=UNLOAD Jan 23 23:34:01.871000 audit[2994]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666386333316337386236386231643762316138333763383032326664 Jan 23 23:34:01.873000 audit: BPF prog-id=88 op=LOAD Jan 23 23:34:01.873000 audit[2994]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2974 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666386333316337386236386231643762316138333763383032326664 Jan 23 23:34:01.873000 audit: BPF prog-id=89 op=LOAD Jan 23 23:34:01.873000 audit[2994]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2974 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666386333316337386236386231643762316138333763383032326664 Jan 23 23:34:01.873000 audit: BPF prog-id=89 op=UNLOAD Jan 23 23:34:01.873000 audit[2994]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666386333316337386236386231643762316138333763383032326664 Jan 23 23:34:01.874000 audit: BPF prog-id=88 op=UNLOAD Jan 23 23:34:01.874000 audit[2994]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666386333316337386236386231643762316138333763383032326664 Jan 23 23:34:01.874000 audit: BPF prog-id=90 op=LOAD Jan 23 23:34:01.874000 audit[2994]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2974 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666386333316337386236386231643762316138333763383032326664 Jan 23 23:34:01.892000 audit: BPF prog-id=91 op=LOAD Jan 23 23:34:01.894000 audit: BPF prog-id=92 op=LOAD Jan 23 23:34:01.894000 audit[3024]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2992 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393665306466363933303237376263323462366661633138333638 Jan 23 23:34:01.895000 audit: BPF prog-id=92 op=UNLOAD Jan 23 23:34:01.895000 audit[3024]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393665306466363933303237376263323462366661633138333638 Jan 23 23:34:01.896000 audit: BPF prog-id=93 op=LOAD Jan 23 23:34:01.896000 audit[3024]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2992 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393665306466363933303237376263323462366661633138333638 Jan 23 23:34:01.896000 audit: BPF prog-id=94 op=LOAD Jan 23 23:34:01.896000 audit[3024]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2992 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393665306466363933303237376263323462366661633138333638 Jan 23 23:34:01.897000 audit: BPF prog-id=94 op=UNLOAD Jan 23 23:34:01.897000 audit[3024]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393665306466363933303237376263323462366661633138333638 Jan 23 23:34:01.897000 audit: BPF prog-id=93 op=UNLOAD Jan 23 23:34:01.897000 audit[3024]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393665306466363933303237376263323462366661633138333638 Jan 23 23:34:01.897000 audit: BPF prog-id=95 op=LOAD Jan 23 23:34:01.897000 audit[3024]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2992 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393665306466363933303237376263323462366661633138333638 Jan 23 23:34:01.908000 audit: BPF prog-id=96 op=LOAD Jan 23 23:34:01.908000 audit: BPF prog-id=97 op=LOAD Jan 23 23:34:01.908000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3013 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565313435336539363431373862346430636564386235643663653264 Jan 23 23:34:01.909000 audit: BPF prog-id=97 op=UNLOAD Jan 23 23:34:01.909000 audit[3033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565313435336539363431373862346430636564386235643663653264 Jan 23 23:34:01.909000 audit: BPF prog-id=98 op=LOAD Jan 23 23:34:01.909000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3013 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565313435336539363431373862346430636564386235643663653264 Jan 23 23:34:01.909000 audit: BPF prog-id=99 op=LOAD Jan 23 23:34:01.909000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3013 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565313435336539363431373862346430636564386235643663653264 Jan 23 23:34:01.910000 audit: BPF prog-id=99 op=UNLOAD Jan 23 23:34:01.910000 audit[3033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565313435336539363431373862346430636564386235643663653264 Jan 23 23:34:01.910000 audit: BPF prog-id=98 op=UNLOAD Jan 23 23:34:01.910000 audit[3033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.910000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565313435336539363431373862346430636564386235643663653264 Jan 23 23:34:01.910000 audit: BPF prog-id=100 op=LOAD Jan 23 23:34:01.910000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3013 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:01.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565313435336539363431373862346430636564386235643663653264 Jan 23 23:34:01.934944 kubelet[2929]: E0123 23:34:01.934466 2929 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.100:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.100:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-100.188d804007ad3fe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-100,UID:ip-172-31-23-100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-100,},FirstTimestamp:2026-01-23 23:34:01.121529826 +0000 UTC m=+0.993702018,LastTimestamp:2026-01-23 23:34:01.121529826 +0000 UTC m=+0.993702018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-100,}" Jan 23 23:34:01.994511 kubelet[2929]: I0123 23:34:01.993033 2929 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-100" Jan 23 23:34:01.994511 kubelet[2929]: E0123 23:34:01.993524 2929 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.100:6443/api/v1/nodes\": dial tcp 172.31.23.100:6443: connect: connection refused" node="ip-172-31-23-100" Jan 23 23:34:02.007422 containerd[1972]: time="2026-01-23T23:34:02.007326330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-100,Uid:ba833457ef79647ade377797320728a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6796e0df6930277bc24b6fac183683dda84ea81e0514e8aa87a907d83ddadeb0\"" Jan 23 23:34:02.007724 containerd[1972]: time="2026-01-23T23:34:02.007661802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-100,Uid:eb66f559761c65e149209f90bbf61959,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff8c31c78b68b1d7b1a837c8022fd7145911974a242bfd99a227fad63a6f4722\"" Jan 23 23:34:02.019372 containerd[1972]: time="2026-01-23T23:34:02.019277922Z" level=info msg="CreateContainer within sandbox \"ff8c31c78b68b1d7b1a837c8022fd7145911974a242bfd99a227fad63a6f4722\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:34:02.021553 containerd[1972]: time="2026-01-23T23:34:02.021491274Z" level=info msg="CreateContainer within sandbox \"6796e0df6930277bc24b6fac183683dda84ea81e0514e8aa87a907d83ddadeb0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:34:02.035004 containerd[1972]: 
time="2026-01-23T23:34:02.034952442Z" level=info msg="Container 6e1ef5e60deb6c6176ba9c892680c18adf981fb363569ab1bf8f6725011a96af: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:34:02.043968 containerd[1972]: time="2026-01-23T23:34:02.043901586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-100,Uid:37115ed1ce6aba6cc45b19a5774c39f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e1453e964178b4d0ced8b5d6ce2d54fe9cd24f0a9435afa4946fbf7aa0fc26f\"" Jan 23 23:34:02.054193 containerd[1972]: time="2026-01-23T23:34:02.054145566Z" level=info msg="CreateContainer within sandbox \"5e1453e964178b4d0ced8b5d6ce2d54fe9cd24f0a9435afa4946fbf7aa0fc26f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:34:02.056109 containerd[1972]: time="2026-01-23T23:34:02.056044266Z" level=info msg="CreateContainer within sandbox \"ff8c31c78b68b1d7b1a837c8022fd7145911974a242bfd99a227fad63a6f4722\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e1ef5e60deb6c6176ba9c892680c18adf981fb363569ab1bf8f6725011a96af\"" Jan 23 23:34:02.059260 containerd[1972]: time="2026-01-23T23:34:02.059196702Z" level=info msg="StartContainer for \"6e1ef5e60deb6c6176ba9c892680c18adf981fb363569ab1bf8f6725011a96af\"" Jan 23 23:34:02.062539 containerd[1972]: time="2026-01-23T23:34:02.062176206Z" level=info msg="connecting to shim 6e1ef5e60deb6c6176ba9c892680c18adf981fb363569ab1bf8f6725011a96af" address="unix:///run/containerd/s/56d33261e0c129343f0262091ddd3e76b4388b2a4564830c482f4aa83cdf4258" protocol=ttrpc version=3 Jan 23 23:34:02.062539 containerd[1972]: time="2026-01-23T23:34:02.062298594Z" level=info msg="Container b5e47d084ef7ddfd8e1ebb61915cacf5bb9d31adc038d4d937ff1aeea6aab648: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:34:02.064325 kubelet[2929]: E0123 23:34:02.064245 2929 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:34:02.075068 kubelet[2929]: E0123 23:34:02.074981 2929 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:34:02.078499 containerd[1972]: time="2026-01-23T23:34:02.078427446Z" level=info msg="Container 4d8b832f477ad891f86dcf68a5d519dcb817d2deebcb281ec1ee3888b82f6167: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:34:02.083500 containerd[1972]: time="2026-01-23T23:34:02.083448990Z" level=info msg="CreateContainer within sandbox \"6796e0df6930277bc24b6fac183683dda84ea81e0514e8aa87a907d83ddadeb0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b5e47d084ef7ddfd8e1ebb61915cacf5bb9d31adc038d4d937ff1aeea6aab648\"" Jan 23 23:34:02.086306 containerd[1972]: time="2026-01-23T23:34:02.086179338Z" level=info msg="StartContainer for \"b5e47d084ef7ddfd8e1ebb61915cacf5bb9d31adc038d4d937ff1aeea6aab648\"" Jan 23 23:34:02.089295 containerd[1972]: time="2026-01-23T23:34:02.089215566Z" level=info msg="connecting to shim b5e47d084ef7ddfd8e1ebb61915cacf5bb9d31adc038d4d937ff1aeea6aab648" 
address="unix:///run/containerd/s/94ade9ea2a97b0905768cafe7900b607b4fbe73dcf3ec5273f3537b8cd9d702d" protocol=ttrpc version=3 Jan 23 23:34:02.105878 containerd[1972]: time="2026-01-23T23:34:02.103578426Z" level=info msg="CreateContainer within sandbox \"5e1453e964178b4d0ced8b5d6ce2d54fe9cd24f0a9435afa4946fbf7aa0fc26f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d8b832f477ad891f86dcf68a5d519dcb817d2deebcb281ec1ee3888b82f6167\"" Jan 23 23:34:02.106340 containerd[1972]: time="2026-01-23T23:34:02.106276686Z" level=info msg="StartContainer for \"4d8b832f477ad891f86dcf68a5d519dcb817d2deebcb281ec1ee3888b82f6167\"" Jan 23 23:34:02.112368 containerd[1972]: time="2026-01-23T23:34:02.112257102Z" level=info msg="connecting to shim 4d8b832f477ad891f86dcf68a5d519dcb817d2deebcb281ec1ee3888b82f6167" address="unix:///run/containerd/s/02ff2167aa972a46f17e220f8bb49bcf883e33f9e5fefbe87a7883e523bd8e24" protocol=ttrpc version=3 Jan 23 23:34:02.123774 systemd[1]: Started cri-containerd-6e1ef5e60deb6c6176ba9c892680c18adf981fb363569ab1bf8f6725011a96af.scope - libcontainer container 6e1ef5e60deb6c6176ba9c892680c18adf981fb363569ab1bf8f6725011a96af. Jan 23 23:34:02.157309 systemd[1]: Started cri-containerd-b5e47d084ef7ddfd8e1ebb61915cacf5bb9d31adc038d4d937ff1aeea6aab648.scope - libcontainer container b5e47d084ef7ddfd8e1ebb61915cacf5bb9d31adc038d4d937ff1aeea6aab648. Jan 23 23:34:02.183752 systemd[1]: Started cri-containerd-4d8b832f477ad891f86dcf68a5d519dcb817d2deebcb281ec1ee3888b82f6167.scope - libcontainer container 4d8b832f477ad891f86dcf68a5d519dcb817d2deebcb281ec1ee3888b82f6167. Jan 23 23:34:02.190000 audit: BPF prog-id=101 op=LOAD Jan 23 23:34:02.192000 audit: BPF prog-id=102 op=LOAD Jan 23 23:34:02.192000 audit[3103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2974 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316566356536306465623663363137366261396338393236383063 Jan 23 23:34:02.193000 audit: BPF prog-id=102 op=UNLOAD Jan 23 23:34:02.193000 audit[3103]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316566356536306465623663363137366261396338393236383063 Jan 23 23:34:02.193000 audit: BPF prog-id=103 op=LOAD Jan 23 23:34:02.193000 audit[3103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2974 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.193000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316566356536306465623663363137366261396338393236383063 Jan 23 23:34:02.194000 audit: BPF prog-id=104 op=LOAD Jan 23 23:34:02.194000 audit[3103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2974 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316566356536306465623663363137366261396338393236383063 Jan 23 23:34:02.195000 audit: BPF prog-id=104 op=UNLOAD Jan 23 23:34:02.195000 audit[3103]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316566356536306465623663363137366261396338393236383063 Jan 23 23:34:02.196000 audit: BPF prog-id=103 op=UNLOAD Jan 23 23:34:02.196000 audit[3103]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316566356536306465623663363137366261396338393236383063 Jan 23 23:34:02.196000 audit: BPF prog-id=105 op=LOAD Jan 23 23:34:02.196000 audit[3103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2974 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665316566356536306465623663363137366261396338393236383063 Jan 23 23:34:02.202000 audit: BPF prog-id=106 op=LOAD Jan 23 23:34:02.203000 audit: BPF prog-id=107 op=LOAD Jan 23 23:34:02.203000 audit[3114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=2992 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.203000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235653437643038346566376464666438653165626236313931356361 Jan 23 23:34:02.203000 audit: BPF prog-id=107 op=UNLOAD Jan 23 23:34:02.203000 audit[3114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235653437643038346566376464666438653165626236313931356361 Jan 23 23:34:02.204000 audit: BPF prog-id=108 op=LOAD Jan 23 23:34:02.204000 audit[3114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=2992 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235653437643038346566376464666438653165626236313931356361 Jan 23 23:34:02.205000 audit: BPF prog-id=109 op=LOAD Jan 23 23:34:02.205000 audit[3114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=2992 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235653437643038346566376464666438653165626236313931356361 Jan 23 23:34:02.205000 audit: BPF prog-id=109 op=UNLOAD Jan 23 23:34:02.205000 audit[3114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235653437643038346566376464666438653165626236313931356361 Jan 23 23:34:02.206000 audit: BPF prog-id=108 op=UNLOAD Jan 23 23:34:02.206000 audit[3114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.206000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235653437643038346566376464666438653165626236313931356361 Jan 23 23:34:02.206000 audit: BPF prog-id=110 op=LOAD Jan 23 23:34:02.206000 audit[3114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=2992 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235653437643038346566376464666438653165626236313931356361 Jan 23 23:34:02.266000 audit: BPF prog-id=111 op=LOAD Jan 23 23:34:02.267000 audit: BPF prog-id=112 op=LOAD Jan 23 23:34:02.267000 audit[3121]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3013 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464386238333266343737616438393166383664636636386135643531 Jan 23 23:34:02.267000 audit: BPF prog-id=112 op=UNLOAD Jan 23 23:34:02.267000 audit[3121]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464386238333266343737616438393166383664636636386135643531 Jan 23 23:34:02.268000 audit: BPF prog-id=113 op=LOAD Jan 23 23:34:02.268000 audit[3121]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3013 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464386238333266343737616438393166383664636636386135643531 Jan 23 23:34:02.269000 audit: BPF prog-id=114 op=LOAD Jan 23 23:34:02.269000 audit[3121]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3013 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.269000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464386238333266343737616438393166383664636636386135643531 Jan 23 23:34:02.271000 audit: BPF prog-id=114 op=UNLOAD Jan 23 23:34:02.271000 audit[3121]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.271000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464386238333266343737616438393166383664636636386135643531 Jan 23 23:34:02.271000 audit: BPF prog-id=113 op=UNLOAD Jan 23 23:34:02.271000 audit[3121]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.271000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464386238333266343737616438393166383664636636386135643531 Jan 23 23:34:02.271000 audit: BPF prog-id=115 op=LOAD Jan 23 23:34:02.271000 audit[3121]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3013 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:02.271000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464386238333266343737616438393166383664636636386135643531 Jan 23 23:34:02.321469 containerd[1972]: time="2026-01-23T23:34:02.321388603Z" level=info msg="StartContainer for \"6e1ef5e60deb6c6176ba9c892680c18adf981fb363569ab1bf8f6725011a96af\" returns successfully" Jan 23 23:34:02.336561 containerd[1972]: time="2026-01-23T23:34:02.336423452Z" level=info msg="StartContainer for \"b5e47d084ef7ddfd8e1ebb61915cacf5bb9d31adc038d4d937ff1aeea6aab648\" returns successfully" Jan 23 23:34:02.403131 containerd[1972]: time="2026-01-23T23:34:02.402152756Z" level=info msg="StartContainer for \"4d8b832f477ad891f86dcf68a5d519dcb817d2deebcb281ec1ee3888b82f6167\" returns successfully" Jan 23 23:34:02.466729 kubelet[2929]: E0123 23:34:02.466648 2929 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-100&limit=500&resourceVersion=0\": dial tcp 172.31.23.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:34:02.798077 kubelet[2929]: I0123 23:34:02.797965 2929 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-100" Jan 23 23:34:03.273943 kubelet[2929]: E0123 23:34:03.273563 2929 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:03.275890 kubelet[2929]: E0123 23:34:03.275842 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:03.284690 kubelet[2929]: E0123 23:34:03.284628 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:04.289608 kubelet[2929]: E0123 23:34:04.289347 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:04.289608 kubelet[2929]: E0123 23:34:04.289411 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:04.290779 kubelet[2929]: E0123 23:34:04.289842 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:04.911054 update_engine[1935]: I20260123 23:34:04.910966 1935 update_attempter.cc:509] Updating boot flags... Jan 23 23:34:05.297136 kubelet[2929]: E0123 23:34:05.296505 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:05.299347 kubelet[2929]: E0123 23:34:05.297017 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:06.263674 kubelet[2929]: E0123 23:34:06.263435 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:06.540418 kubelet[2929]: E0123 23:34:06.540302 2929 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:06.833131 kubelet[2929]: E0123 23:34:06.832563 2929 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-100\" not found" node="ip-172-31-23-100" Jan 23 23:34:07.043952 kubelet[2929]: I0123 23:34:07.042665 2929 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-100" Jan 23 23:34:07.053289 kubelet[2929]: I0123 23:34:07.052474 2929 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-100" Jan 23 23:34:07.104242 kubelet[2929]: E0123 23:34:07.104109 2929 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-100\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-100" Jan 23 23:34:07.104462 kubelet[2929]: I0123 23:34:07.104437 2929 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:07.115856 kubelet[2929]: I0123 23:34:07.115813 2929 apiserver.go:52] "Watching apiserver" Jan 23 23:34:07.117979 kubelet[2929]: E0123 23:34:07.117936 2929 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ip-172-31-23-100\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:07.118385 kubelet[2929]: I0123 23:34:07.118180 2929 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-100" Jan 23 23:34:07.129105 kubelet[2929]: E0123 23:34:07.129041 2929 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-100\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-100" Jan 23 23:34:07.153020 kubelet[2929]: I0123 23:34:07.152966 2929 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:34:09.122847 systemd[1]: Reload requested from client PID 3307 ('systemctl') (unit session-8.scope)... Jan 23 23:34:09.122877 systemd[1]: Reloading... Jan 23 23:34:09.337960 zram_generator::config[3360]: No configuration found. Jan 23 23:34:09.846805 systemd[1]: Reloading finished in 723 ms. Jan 23 23:34:09.910865 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:34:09.930564 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:34:09.932987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:34:09.937069 kernel: kauditd_printk_skb: 201 callbacks suppressed Jan 23 23:34:09.937159 kernel: audit: type=1131 audit(1769211249.932:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:34:09.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:34:09.933107 systemd[1]: kubelet.service: Consumed 1.705s CPU time, 126.8M memory peak. Jan 23 23:34:09.942162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 23:34:09.949608 kernel: audit: type=1334 audit(1769211249.943:398): prog-id=116 op=LOAD Jan 23 23:34:09.949754 kernel: audit: type=1334 audit(1769211249.943:399): prog-id=81 op=UNLOAD Jan 23 23:34:09.943000 audit: BPF prog-id=116 op=LOAD Jan 23 23:34:09.943000 audit: BPF prog-id=81 op=UNLOAD Jan 23 23:34:09.952628 kernel: audit: type=1334 audit(1769211249.946:400): prog-id=117 op=LOAD Jan 23 23:34:09.946000 audit: BPF prog-id=117 op=LOAD Jan 23 23:34:09.956518 kernel: audit: type=1334 audit(1769211249.946:401): prog-id=69 op=UNLOAD Jan 23 23:34:09.946000 audit: BPF prog-id=69 op=UNLOAD Jan 23 23:34:09.948000 audit: BPF prog-id=118 op=LOAD Jan 23 23:34:09.948000 audit: BPF prog-id=119 op=LOAD Jan 23 23:34:09.960325 kernel: audit: type=1334 audit(1769211249.948:402): prog-id=118 op=LOAD Jan 23 23:34:09.960397 kernel: audit: type=1334 audit(1769211249.948:403): prog-id=119 op=LOAD Jan 23 23:34:09.962845 kernel: audit: type=1334 audit(1769211249.948:404): prog-id=70 op=UNLOAD Jan 23 23:34:09.948000 audit: BPF prog-id=70 op=UNLOAD Jan 23 23:34:09.948000 audit: BPF prog-id=71 op=UNLOAD Jan 23 23:34:09.964725 kernel: audit: type=1334 audit(1769211249.948:405): prog-id=71 op=UNLOAD Jan 23 23:34:09.950000 audit: BPF prog-id=120 op=LOAD Jan 23 23:34:09.950000 audit: BPF prog-id=78 op=UNLOAD Jan 23 23:34:09.951000 audit: BPF prog-id=121 op=LOAD Jan 23 23:34:09.951000 audit: BPF prog-id=122 op=LOAD Jan 23 23:34:09.951000 audit: BPF prog-id=79 op=UNLOAD Jan 23 23:34:09.951000 audit: BPF prog-id=80 op=UNLOAD Jan 23 23:34:09.966986 kernel: audit: type=1334 audit(1769211249.950:406): prog-id=120 op=LOAD Jan 23 23:34:09.953000 audit: BPF prog-id=123 op=LOAD Jan 23 23:34:09.953000 audit: BPF prog-id=66 op=UNLOAD Jan 23 23:34:09.953000 audit: BPF prog-id=124 op=LOAD Jan 23 23:34:09.955000 audit: BPF prog-id=125 op=LOAD Jan 23 23:34:09.955000 audit: BPF prog-id=67 op=UNLOAD Jan 23 23:34:09.955000 audit: BPF prog-id=68 op=UNLOAD Jan 23 23:34:09.961000 audit: BPF prog-id=126 op=LOAD Jan 23 23:34:09.961000 audit: BPF prog-id=83 op=UNLOAD Jan 23 23:34:09.965000 audit: BPF prog-id=127 op=LOAD Jan 23 23:34:09.965000 audit: BPF prog-id=75 op=UNLOAD Jan 23 23:34:09.965000 audit: BPF prog-id=128 op=LOAD Jan 23 23:34:09.965000 audit: BPF prog-id=129 op=LOAD Jan 23 23:34:09.965000 audit: BPF prog-id=76 op=UNLOAD Jan 23 23:34:09.966000 audit: BPF prog-id=77 op=UNLOAD Jan 23 23:34:09.968000 audit: BPF prog-id=130 op=LOAD Jan 23 23:34:09.968000 audit: BPF prog-id=72 op=UNLOAD Jan 23 23:34:09.968000 audit: BPF prog-id=131 op=LOAD Jan 23 23:34:09.968000 audit: BPF prog-id=132 op=LOAD Jan 23 23:34:09.968000 audit: BPF prog-id=73 op=UNLOAD Jan 23 23:34:09.968000 audit: BPF prog-id=74 op=UNLOAD Jan 23 23:34:09.969000 audit: BPF prog-id=133 op=LOAD Jan 23 23:34:09.970000 audit: BPF prog-id=134 op=LOAD Jan 23 23:34:09.970000 audit: BPF prog-id=84 op=UNLOAD Jan 23 23:34:09.970000 audit: BPF prog-id=85 op=UNLOAD Jan 23 23:34:09.973000 audit: BPF prog-id=135 op=LOAD Jan 23 23:34:09.973000 audit: BPF prog-id=82 op=UNLOAD Jan 23 23:34:10.305446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:34:10.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:34:10.329773 (kubelet)[3413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:34:10.431184 kubelet[3413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:34:10.433054 kubelet[3413]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:34:10.433054 kubelet[3413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:34:10.433054 kubelet[3413]: I0123 23:34:10.432209 3413 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:34:10.460879 kubelet[3413]: I0123 23:34:10.460835 3413 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:34:10.461243 kubelet[3413]: I0123 23:34:10.461191 3413 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:34:10.461939 kubelet[3413]: I0123 23:34:10.461853 3413 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:34:10.466269 kubelet[3413]: I0123 23:34:10.466231 3413 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 23:34:10.474606 kubelet[3413]: I0123 23:34:10.474557 3413 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:34:10.490179 kubelet[3413]: I0123 23:34:10.490054 3413 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 23:34:10.505942 kubelet[3413]: I0123 23:34:10.505135 3413 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:34:10.505942 kubelet[3413]: I0123 23:34:10.505610 3413 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:34:10.507414 kubelet[3413]: I0123 23:34:10.505692 3413 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:34:10.508424 kubelet[3413]: I0123 23:34:10.508386 3413 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:34:10.508580 kubelet[3413]: I0123 23:34:10.508561 3413 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:34:10.508809 kubelet[3413]: I0123 23:34:10.508771 3413 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:34:10.509977 kubelet[3413]: I0123 23:34:10.509900 3413 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:34:10.510202 kubelet[3413]: I0123 23:34:10.510181 3413 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:34:10.510356 kubelet[3413]: I0123 23:34:10.510336 3413 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:34:10.510509 kubelet[3413]: I0123 23:34:10.510458 3413 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:34:10.520976 kubelet[3413]: I0123 23:34:10.520739 3413 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 23:34:10.523323 kubelet[3413]: I0123 23:34:10.522272 3413 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:34:10.530627 kubelet[3413]: I0123 23:34:10.530526 3413 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:34:10.530852 kubelet[3413]: I0123 23:34:10.530829 3413 server.go:1289] "Started kubelet" Jan 23 23:34:10.532354 kubelet[3413]: I0123 23:34:10.532304 3413 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:34:10.535937 kubelet[3413]: I0123 
23:34:10.534486 3413 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:34:10.537897 kubelet[3413]: I0123 23:34:10.537279 3413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:34:10.545622 kubelet[3413]: I0123 23:34:10.545562 3413 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:34:10.547936 kubelet[3413]: I0123 23:34:10.546999 3413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:34:10.548784 kubelet[3413]: I0123 23:34:10.548395 3413 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:34:10.553038 kubelet[3413]: I0123 23:34:10.552871 3413 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:34:10.553183 kubelet[3413]: E0123 23:34:10.553108 3413 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-100\" not found" Jan 23 23:34:10.554170 kubelet[3413]: I0123 23:34:10.554113 3413 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:34:10.554369 kubelet[3413]: I0123 23:34:10.554337 3413 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:34:10.595942 kubelet[3413]: I0123 23:34:10.595065 3413 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 23:34:10.605506 kubelet[3413]: I0123 23:34:10.604642 3413 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 23:34:10.605506 kubelet[3413]: I0123 23:34:10.604685 3413 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 23:34:10.605506 kubelet[3413]: I0123 23:34:10.604740 3413 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:34:10.605506 kubelet[3413]: I0123 23:34:10.604756 3413 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 23:34:10.605506 kubelet[3413]: E0123 23:34:10.604829 3413 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:34:10.643653 kubelet[3413]: I0123 23:34:10.643436 3413 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:34:10.646333 kubelet[3413]: I0123 23:34:10.643988 3413 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:34:10.646333 kubelet[3413]: I0123 23:34:10.644188 3413 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:34:10.658564 kubelet[3413]: E0123 23:34:10.658505 3413 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:34:10.708269 kubelet[3413]: E0123 23:34:10.708009 3413 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 23:34:10.857233 kubelet[3413]: I0123 23:34:10.856847 3413 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:34:10.857233 kubelet[3413]: I0123 23:34:10.856873 3413 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:34:10.857233 kubelet[3413]: I0123 23:34:10.856939 3413 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:34:10.858667 kubelet[3413]: I0123 23:34:10.858192 3413 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:34:10.860851 kubelet[3413]: I0123 23:34:10.858949 3413 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:34:10.860851 kubelet[3413]: I0123 23:34:10.859006 3413 policy_none.go:49] "None policy: Start" Jan 23 23:34:10.860851 kubelet[3413]: I0123 23:34:10.859028 3413 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:34:10.860851 kubelet[3413]: I0123 23:34:10.859055 3413 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:34:10.860851 kubelet[3413]: I0123 23:34:10.859270 3413 state_mem.go:75] "Updated machine memory state" Jan 23 23:34:10.874794 kubelet[3413]: E0123 23:34:10.874342 3413 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:34:10.874794 kubelet[3413]: I0123 23:34:10.874625 3413 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:34:10.874794 kubelet[3413]: I0123 23:34:10.874643 3413 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:34:10.876282 kubelet[3413]: I0123 23:34:10.875941 3413 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:34:10.880246 kubelet[3413]: E0123 23:34:10.879584 3413 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:34:10.915257 kubelet[3413]: I0123 23:34:10.915153 3413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-100" Jan 23 23:34:10.915936 kubelet[3413]: I0123 23:34:10.915866 3413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-100" Jan 23 23:34:10.916683 kubelet[3413]: I0123 23:34:10.916644 3413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:10.963544 kubelet[3413]: I0123 23:34:10.963477 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:10.963703 kubelet[3413]: I0123 23:34:10.963553 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:10.963703 kubelet[3413]: I0123 23:34:10.963599 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:10.963703 kubelet[3413]: I0123 23:34:10.963636 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:10.963703 kubelet[3413]: I0123 23:34:10.963674 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37115ed1ce6aba6cc45b19a5774c39f6-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-100\" (UID: \"37115ed1ce6aba6cc45b19a5774c39f6\") " pod="kube-system/kube-scheduler-ip-172-31-23-100" Jan 23 23:34:10.964017 kubelet[3413]: I0123 23:34:10.963715 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb66f559761c65e149209f90bbf61959-ca-certs\") pod \"kube-apiserver-ip-172-31-23-100\" (UID: \"eb66f559761c65e149209f90bbf61959\") " pod="kube-system/kube-apiserver-ip-172-31-23-100" Jan 23 23:34:10.964017 kubelet[3413]: I0123 23:34:10.963756 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb66f559761c65e149209f90bbf61959-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-100\" (UID: \"eb66f559761c65e149209f90bbf61959\") " pod="kube-system/kube-apiserver-ip-172-31-23-100" Jan 23 23:34:10.964017 kubelet[3413]: I0123 23:34:10.963802 3413 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb66f559761c65e149209f90bbf61959-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-100\" (UID: \"eb66f559761c65e149209f90bbf61959\") " pod="kube-system/kube-apiserver-ip-172-31-23-100" Jan 23 23:34:10.964017 kubelet[3413]: I0123 23:34:10.963840 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba833457ef79647ade377797320728a6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-100\" (UID: \"ba833457ef79647ade377797320728a6\") " pod="kube-system/kube-controller-manager-ip-172-31-23-100" Jan 23 23:34:10.988326 kubelet[3413]: I0123 23:34:10.988211 3413 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-100" Jan 23 23:34:11.002937 kubelet[3413]: I0123 23:34:11.002682 3413 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-23-100" Jan 23 23:34:11.002937 kubelet[3413]: I0123 23:34:11.002804 3413 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-100" Jan 23 23:34:11.515322 kubelet[3413]: I0123 23:34:11.515248 3413 apiserver.go:52] "Watching apiserver" Jan 23 23:34:11.554757 kubelet[3413]: I0123 23:34:11.554705 3413 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:34:11.700996 kubelet[3413]: I0123 23:34:11.700858 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-100" podStartSLOduration=1.70068555 podStartE2EDuration="1.70068555s" podCreationTimestamp="2026-01-23 23:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:34:11.682179882 +0000 UTC m=+1.342141700" watchObservedRunningTime="2026-01-23 23:34:11.70068555 +0000 UTC m=+1.360647344" Jan 23 23:34:11.724961 kubelet[3413]: I0123 23:34:11.723235 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-100" podStartSLOduration=1.72321255 podStartE2EDuration="1.72321255s" podCreationTimestamp="2026-01-23 23:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:34:11.70210053 +0000 UTC m=+1.362062336" watchObservedRunningTime="2026-01-23 23:34:11.72321255 +0000 UTC m=+1.383174344" Jan 23 23:34:11.724961 kubelet[3413]: I0123 23:34:11.723368 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-100" podStartSLOduration=1.7233593219999999 podStartE2EDuration="1.723359322s" podCreationTimestamp="2026-01-23 23:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:34:11.721116042 +0000 UTC m=+1.381077812" watchObservedRunningTime="2026-01-23 23:34:11.723359322 +0000 UTC m=+1.383321128" Jan 23 23:34:15.114873 kubelet[3413]: I0123 23:34:15.114821 3413 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:34:15.116392 containerd[1972]: time="2026-01-23T23:34:15.116170951Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 23:34:15.117419 kubelet[3413]: I0123 23:34:15.116479 3413 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:34:15.850292 systemd[1]: Created slice kubepods-besteffort-pod35b8dc61_0882_4f04_b2fa_0faa3e66eb0d.slice - libcontainer container kubepods-besteffort-pod35b8dc61_0882_4f04_b2fa_0faa3e66eb0d.slice. Jan 23 23:34:15.899317 kubelet[3413]: I0123 23:34:15.899243 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35b8dc61-0882-4f04-b2fa-0faa3e66eb0d-kube-proxy\") pod \"kube-proxy-qhf2d\" (UID: \"35b8dc61-0882-4f04-b2fa-0faa3e66eb0d\") " pod="kube-system/kube-proxy-qhf2d" Jan 23 23:34:15.899317 kubelet[3413]: I0123 23:34:15.899317 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35b8dc61-0882-4f04-b2fa-0faa3e66eb0d-lib-modules\") pod \"kube-proxy-qhf2d\" (UID: \"35b8dc61-0882-4f04-b2fa-0faa3e66eb0d\") " pod="kube-system/kube-proxy-qhf2d" Jan 23 23:34:15.899490 kubelet[3413]: I0123 23:34:15.899361 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2svn5\" (UniqueName: \"kubernetes.io/projected/35b8dc61-0882-4f04-b2fa-0faa3e66eb0d-kube-api-access-2svn5\") pod \"kube-proxy-qhf2d\" (UID: \"35b8dc61-0882-4f04-b2fa-0faa3e66eb0d\") " pod="kube-system/kube-proxy-qhf2d" Jan 23 23:34:15.899490 kubelet[3413]: I0123 23:34:15.899404 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35b8dc61-0882-4f04-b2fa-0faa3e66eb0d-xtables-lock\") pod \"kube-proxy-qhf2d\" (UID: \"35b8dc61-0882-4f04-b2fa-0faa3e66eb0d\") " pod="kube-system/kube-proxy-qhf2d" Jan 23 23:34:16.167290 containerd[1972]: time="2026-01-23T23:34:16.167157512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhf2d,Uid:35b8dc61-0882-4f04-b2fa-0faa3e66eb0d,Namespace:kube-system,Attempt:0,}" Jan 23 23:34:16.219776 containerd[1972]: time="2026-01-23T23:34:16.219543381Z" level=info msg="connecting to shim 3ef70c3b2074b0130c91f27a6ff7ae30c8a6c19bf5021a1a0f7ad431aca346f3" address="unix:///run/containerd/s/40746b9f8eaace49b09e354a8171bc441a4e408a410e66c454af6ee64f6c72a5" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:34:16.277753 systemd[1]: Started cri-containerd-3ef70c3b2074b0130c91f27a6ff7ae30c8a6c19bf5021a1a0f7ad431aca346f3.scope - libcontainer container 3ef70c3b2074b0130c91f27a6ff7ae30c8a6c19bf5021a1a0f7ad431aca346f3. 
Jan 23 23:34:16.321000 audit: BPF prog-id=136 op=LOAD Jan 23 23:34:16.323303 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 23 23:34:16.323495 kernel: audit: type=1334 audit(1769211256.321:439): prog-id=136 op=LOAD Jan 23 23:34:16.325000 audit: BPF prog-id=137 op=LOAD Jan 23 23:34:16.329479 kernel: audit: type=1334 audit(1769211256.325:440): prog-id=137 op=LOAD Jan 23 23:34:16.325000 audit[3485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.336930 kernel: audit: type=1300 audit(1769211256.325:440): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.345502 kernel: audit: type=1327 audit(1769211256.325:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.325000 audit: BPF prog-id=137 op=UNLOAD Jan 23 23:34:16.348952 kernel: audit: type=1334 audit(1769211256.325:441): prog-id=137 op=UNLOAD Jan 23 23:34:16.325000 audit[3485]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.355557 kernel: audit: type=1300 audit(1769211256.325:441): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.362017 kernel: audit: type=1327 audit(1769211256.325:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.325000 audit: BPF prog-id=138 op=LOAD Jan 23 23:34:16.364163 kernel: audit: type=1334 audit(1769211256.325:442): prog-id=138 op=LOAD Jan 23 23:34:16.325000 audit[3485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.370799 kernel: audit: type=1300 audit(1769211256.325:442): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.378057 kernel: audit: type=1327 audit(1769211256.325:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.325000 audit: BPF prog-id=139 op=LOAD Jan 23 23:34:16.325000 audit[3485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.325000 audit: BPF prog-id=139 op=UNLOAD Jan 23 23:34:16.325000 audit[3485]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.325000 audit: BPF prog-id=138 op=UNLOAD Jan 23 23:34:16.325000 audit[3485]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.326000 audit: BPF prog-id=140 op=LOAD Jan 23 23:34:16.326000 audit[3485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3472 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.326000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365663730633362323037346230313330633931663237613666663761 Jan 23 23:34:16.424684 systemd[1]: Created slice kubepods-besteffort-pod65b5ad27_ba61_446a_84fa_fb0ab3567adf.slice - libcontainer container kubepods-besteffort-pod65b5ad27_ba61_446a_84fa_fb0ab3567adf.slice. Jan 23 23:34:16.445268 containerd[1972]: time="2026-01-23T23:34:16.445061062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhf2d,Uid:35b8dc61-0882-4f04-b2fa-0faa3e66eb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ef70c3b2074b0130c91f27a6ff7ae30c8a6c19bf5021a1a0f7ad431aca346f3\"" Jan 23 23:34:16.455707 containerd[1972]: time="2026-01-23T23:34:16.455476162Z" level=info msg="CreateContainer within sandbox \"3ef70c3b2074b0130c91f27a6ff7ae30c8a6c19bf5021a1a0f7ad431aca346f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:34:16.475227 containerd[1972]: time="2026-01-23T23:34:16.475157782Z" level=info msg="Container edd3dd46517b2ad91e5f85372f0db2efd44b5501d01b6fe1cec50ae896cae4e9: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:34:16.480651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128842589.mount: Deactivated successfully. Jan 23 23:34:16.492433 containerd[1972]: time="2026-01-23T23:34:16.492349174Z" level=info msg="CreateContainer within sandbox \"3ef70c3b2074b0130c91f27a6ff7ae30c8a6c19bf5021a1a0f7ad431aca346f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"edd3dd46517b2ad91e5f85372f0db2efd44b5501d01b6fe1cec50ae896cae4e9\"" Jan 23 23:34:16.493610 containerd[1972]: time="2026-01-23T23:34:16.493382098Z" level=info msg="StartContainer for \"edd3dd46517b2ad91e5f85372f0db2efd44b5501d01b6fe1cec50ae896cae4e9\"" Jan 23 23:34:16.497502 containerd[1972]: time="2026-01-23T23:34:16.497434258Z" level=info msg="connecting to shim edd3dd46517b2ad91e5f85372f0db2efd44b5501d01b6fe1cec50ae896cae4e9" address="unix:///run/containerd/s/40746b9f8eaace49b09e354a8171bc441a4e408a410e66c454af6ee64f6c72a5" protocol=ttrpc version=3 Jan 23 23:34:16.504195 kubelet[3413]: I0123 23:34:16.504099 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2fct\" (UniqueName: \"kubernetes.io/projected/65b5ad27-ba61-446a-84fa-fb0ab3567adf-kube-api-access-x2fct\") pod \"tigera-operator-7dcd859c48-xwcnw\" (UID: \"65b5ad27-ba61-446a-84fa-fb0ab3567adf\") " pod="tigera-operator/tigera-operator-7dcd859c48-xwcnw" Jan 23 23:34:16.504195 kubelet[3413]: I0123 23:34:16.504179 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65b5ad27-ba61-446a-84fa-fb0ab3567adf-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xwcnw\" (UID: \"65b5ad27-ba61-446a-84fa-fb0ab3567adf\") " pod="tigera-operator/tigera-operator-7dcd859c48-xwcnw" Jan 23 23:34:16.537255 systemd[1]: Started cri-containerd-edd3dd46517b2ad91e5f85372f0db2efd44b5501d01b6fe1cec50ae896cae4e9.scope - libcontainer container edd3dd46517b2ad91e5f85372f0db2efd44b5501d01b6fe1cec50ae896cae4e9. 
Jan 23 23:34:16.626000 audit: BPF prog-id=141 op=LOAD Jan 23 23:34:16.626000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3472 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564643364643436353137623261643931653566383533373266306462 Jan 23 23:34:16.626000 audit: BPF prog-id=142 op=LOAD Jan 23 23:34:16.626000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3472 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564643364643436353137623261643931653566383533373266306462 Jan 23 23:34:16.627000 audit: BPF prog-id=142 op=UNLOAD Jan 23 23:34:16.627000 audit[3511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3472 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564643364643436353137623261643931653566383533373266306462 Jan 23 23:34:16.628000 audit: BPF prog-id=141 op=UNLOAD Jan 23 23:34:16.628000 audit[3511]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3472 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564643364643436353137623261643931653566383533373266306462 Jan 23 23:34:16.628000 audit: BPF prog-id=143 op=LOAD Jan 23 23:34:16.628000 audit[3511]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3472 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564643364643436353137623261643931653566383533373266306462 Jan 23 23:34:16.667250 containerd[1972]: time="2026-01-23T23:34:16.667203251Z" level=info msg="StartContainer for 
\"edd3dd46517b2ad91e5f85372f0db2efd44b5501d01b6fe1cec50ae896cae4e9\" returns successfully" Jan 23 23:34:16.737257 containerd[1972]: time="2026-01-23T23:34:16.737169551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xwcnw,Uid:65b5ad27-ba61-446a-84fa-fb0ab3567adf,Namespace:tigera-operator,Attempt:0,}" Jan 23 23:34:16.780979 containerd[1972]: time="2026-01-23T23:34:16.780268259Z" level=info msg="connecting to shim e1f5cf6d4793a5f80b5134fcf39578ccfd8baa32ddc53e011af851ee3e7eb28a" address="unix:///run/containerd/s/0586290d40333251d6b24cb7e74b077f02f4956b616ea5d90241195b4d28c484" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:34:16.807644 kubelet[3413]: I0123 23:34:16.807288 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qhf2d" podStartSLOduration=1.807265295 podStartE2EDuration="1.807265295s" podCreationTimestamp="2026-01-23 23:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:34:16.804145883 +0000 UTC m=+6.464107665" watchObservedRunningTime="2026-01-23 23:34:16.807265295 +0000 UTC m=+6.467227077" Jan 23 23:34:16.853571 systemd[1]: Started cri-containerd-e1f5cf6d4793a5f80b5134fcf39578ccfd8baa32ddc53e011af851ee3e7eb28a.scope - libcontainer container e1f5cf6d4793a5f80b5134fcf39578ccfd8baa32ddc53e011af851ee3e7eb28a. Jan 23 23:34:16.884000 audit: BPF prog-id=144 op=LOAD Jan 23 23:34:16.885000 audit: BPF prog-id=145 op=LOAD Jan 23 23:34:16.885000 audit[3563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3551 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531663563663664343739336135663830623531333466636633393537 Jan 23 23:34:16.885000 audit: BPF prog-id=145 op=UNLOAD Jan 23 23:34:16.885000 audit[3563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3551 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531663563663664343739336135663830623531333466636633393537 Jan 23 23:34:16.886000 audit: BPF prog-id=146 op=LOAD Jan 23 23:34:16.886000 audit[3563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3551 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531663563663664343739336135663830623531333466636633393537 Jan 23 23:34:16.887000 
audit: BPF prog-id=147 op=LOAD Jan 23 23:34:16.887000 audit[3563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3551 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531663563663664343739336135663830623531333466636633393537 Jan 23 23:34:16.887000 audit: BPF prog-id=147 op=UNLOAD Jan 23 23:34:16.887000 audit[3563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3551 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531663563663664343739336135663830623531333466636633393537 Jan 23 23:34:16.887000 audit: BPF prog-id=146 op=UNLOAD Jan 23 23:34:16.887000 audit[3563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3551 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531663563663664343739336135663830623531333466636633393537 Jan 23 23:34:16.887000 audit: BPF prog-id=148 op=LOAD Jan 23 23:34:16.887000 audit[3563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3551 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:16.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531663563663664343739336135663830623531333466636633393537 Jan 23 23:34:16.960937 containerd[1972]: time="2026-01-23T23:34:16.960839124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xwcnw,Uid:65b5ad27-ba61-446a-84fa-fb0ab3567adf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e1f5cf6d4793a5f80b5134fcf39578ccfd8baa32ddc53e011af851ee3e7eb28a\"" Jan 23 23:34:16.968481 containerd[1972]: time="2026-01-23T23:34:16.968375616Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 23:34:17.000000 audit[3624]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3624 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.000000 audit[3624]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe2801480 a2=0 a3=1 items=0 ppid=3525 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 23:34:17.010000 audit[3626]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3626 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.010000 audit[3626]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1c3f650 a2=0 a3=1 items=0 ppid=3525 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 23:34:17.013000 audit[3629]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.013000 audit[3629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc0ff750 a2=0 a3=1 items=0 ppid=3525 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.013000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 23:34:17.029000 audit[3630]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.029000 audit[3630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6cb4d90 a2=0 a3=1 items=0 ppid=3525 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 23:34:17.059000 audit[3631]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.059000 audit[3631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef41cbb0 a2=0 a3=1 items=0 ppid=3525 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 23:34:17.064000 audit[3632]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.064000 audit[3632]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb9f7af0 a2=0 a3=1 items=0 ppid=3525 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.064000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 23:34:17.115000 audit[3633]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.115000 audit[3633]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe5b12500 a2=0 a3=1 items=0 ppid=3525 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 23:34:17.121000 audit[3635]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.121000 audit[3635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc278ef60 a2=0 a3=1 items=0 ppid=3525 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.121000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 23 23:34:17.129000 audit[3638]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.129000 audit[3638]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffffb75a60 a2=0 a3=1 items=0 ppid=3525 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 23 23:34:17.132000 audit[3639]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.132000 audit[3639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3318340 a2=0 a3=1 items=0 ppid=3525 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 23:34:17.137000 audit[3641]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.137000 audit[3641]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe36020e0 a2=0 a3=1 items=0 ppid=3525 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.137000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 23:34:17.140000 audit[3642]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.140000 audit[3642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe596cf50 a2=0 a3=1 items=0 ppid=3525 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 23:34:17.145000 audit[3644]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.145000 audit[3644]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdece6af0 a2=0 a3=1 items=0 ppid=3525 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 23:34:17.155000 audit[3647]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.155000 audit[3647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc848cac0 a2=0 a3=1 items=0 ppid=3525 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 23 23:34:17.158000 audit[3648]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.158000 audit[3648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce06a800 a2=0 a3=1 items=0 ppid=3525 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 23:34:17.164000 audit[3650]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.164000 audit[3650]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=ffffeeb51ff0 a2=0 a3=1 items=0 ppid=3525 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 23:34:17.166000 audit[3651]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.166000 audit[3651]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2987df0 a2=0 a3=1 items=0 ppid=3525 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.166000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 23:34:17.172000 audit[3653]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.172000 audit[3653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb762030 a2=0 a3=1 items=0 ppid=3525 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 23:34:17.180000 audit[3656]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.180000 audit[3656]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc253b610 a2=0 a3=1 items=0 ppid=3525 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.180000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 23:34:17.187000 audit[3659]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.187000 audit[3659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc70068b0 a2=0 a3=1 items=0 ppid=3525 pid=3659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.187000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 23:34:17.189000 audit[3660]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.189000 audit[3660]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff593f460 a2=0 a3=1 items=0 ppid=3525 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 23:34:17.195000 audit[3662]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.195000 audit[3662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcded0ac0 a2=0 a3=1 items=0 ppid=3525 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 23:34:17.202000 audit[3665]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.202000 audit[3665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd75b2ba0 a2=0 a3=1 items=0 ppid=3525 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 23:34:17.205000 audit[3666]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.205000 audit[3666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc41c8800 a2=0 a3=1 items=0 ppid=3525 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.205000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 23:34:17.214000 audit[3668]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 23:34:17.214000 audit[3668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffcb070de0 a2=0 a3=1 items=0 ppid=3525 pid=3668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 23:34:17.290000 audit[3674]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:17.290000 audit[3674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd2c46760 a2=0 a3=1 items=0 ppid=3525 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:17.298000 audit[3674]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:17.298000 audit[3674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd2c46760 a2=0 a3=1 items=0 ppid=3525 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.298000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:17.301000 audit[3679]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3679 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.301000 audit[3679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe1035740 a2=0 a3=1 items=0 ppid=3525 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.301000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 23:34:17.307000 audit[3681]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3681 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.307000 audit[3681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff53ef760 a2=0 a3=1 items=0 ppid=3525 pid=3681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.307000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 23 23:34:17.316000 audit[3684]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.316000 audit[3684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff8d87670 a2=0 a3=1 items=0 ppid=3525 pid=3684 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.316000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 23 23:34:17.319000 audit[3685]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3685 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.319000 audit[3685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb20a2f0 a2=0 a3=1 items=0 ppid=3525 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 23:34:17.324000 audit[3687]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3687 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.324000 audit[3687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffe6a45f0 a2=0 a3=1 items=0 ppid=3525 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.324000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 23:34:17.327000 audit[3688]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3688 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.327000 audit[3688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff54ed7e0 a2=0 a3=1 items=0 ppid=3525 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 23:34:17.332000 audit[3690]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3690 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.332000 audit[3690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff8d8e9c0 a2=0 a3=1 items=0 ppid=3525 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 23 23:34:17.340000 audit[3693]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain 
pid=3693 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.340000 audit[3693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe85810a0 a2=0 a3=1 items=0 ppid=3525 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.340000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 23:34:17.342000 audit[3694]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.342000 audit[3694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2bffec0 a2=0 a3=1 items=0 ppid=3525 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 23:34:17.348000 audit[3696]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.348000 audit[3696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd3a94f20 a2=0 a3=1 items=0 ppid=3525 pid=3696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 23:34:17.350000 audit[3697]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.350000 audit[3697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffed42ac10 a2=0 a3=1 items=0 ppid=3525 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.350000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 23:34:17.356000 audit[3699]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.356000 audit[3699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffda15b640 a2=0 a3=1 items=0 ppid=3525 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.356000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 23:34:17.366000 audit[3702]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3702 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.366000 audit[3702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffae07fe0 a2=0 a3=1 items=0 ppid=3525 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 23:34:17.376000 audit[3705]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3705 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.376000 audit[3705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcdb36d30 a2=0 a3=1 items=0 ppid=3525 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 23 23:34:17.379000 audit[3706]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3706 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.379000 audit[3706]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3c00280 a2=0 a3=1 items=0 ppid=3525 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.379000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 23:34:17.384000 audit[3708]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3708 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.384000 audit[3708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff0444910 a2=0 a3=1 items=0 ppid=3525 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 23:34:17.392000 audit[3711]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.392000 audit[3711]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=ffffcb06cf90 a2=0 a3=1 items=0 ppid=3525 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.392000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 23:34:17.395000 audit[3712]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3712 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.395000 audit[3712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd09672b0 a2=0 a3=1 items=0 ppid=3525 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 23:34:17.401000 audit[3714]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3714 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.401000 audit[3714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc0805b20 a2=0 a3=1 items=0 ppid=3525 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.401000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 23:34:17.403000 audit[3715]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.403000 audit[3715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc990da30 a2=0 a3=1 items=0 ppid=3525 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.403000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 23:34:17.409000 audit[3717]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 23:34:17.409000 audit[3717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff500b0d0 a2=0 a3=1 items=0 ppid=3525 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 23:34:17.417000 audit[3720]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 
23:34:17.417000 audit[3720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdb0933d0 a2=0 a3=1 items=0 ppid=3525 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.417000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 23:34:17.424000 audit[3722]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 23:34:17.424000 audit[3722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffc13c6d70 a2=0 a3=1 items=0 ppid=3525 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.424000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:17.425000 audit[3722]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 23:34:17.425000 audit[3722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc13c6d70 a2=0 a3=1 items=0 ppid=3525 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:17.425000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:18.312165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407085298.mount: Deactivated successfully. 
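The PROCTITLE fields in the audit records above are hex-encoded because auditd prints the process title that way whenever it contains non-printable bytes; inside the encoded value the executed argv elements are separated by NULs. A minimal sketch for decoding them, in plain Python, using only a value copied from the ip6tables-restore record above (nothing else about the host is assumed):

    import binascii

    def decode_proctitle(hex_value: str) -> str:
        """Turn an audit PROCTITLE hex string back into a readable command line."""
        raw = binascii.unhexlify(hex_value)
        # argv entries are NUL-separated; join them with spaces for display
        return " ".join(part.decode("utf-8", errors="replace")
                        for part in raw.split(b"\x00") if part)

    if __name__ == "__main__":
        # Copied verbatim from the ip6tables-restore audit record above.
        sample = ("6970367461626C65732D726573746F7265002D770035002D5700"
                  "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
        print(decode_proctitle(sample))
        # -> ip6tables-restore -w 5 -W 100000 --noflush --counters

Run over the earlier PROCTITLE values, the same decoder resolves them to the ip6tables -I INPUT/OUTPUT/FORWARD and -N KUBE-* invocations that produced the NETFILTER_CFG events (a few are cut short by auditd's proctitle length limit).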
Jan 23 23:34:19.053793 containerd[1972]: time="2026-01-23T23:34:19.053714459Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:19.056792 containerd[1972]: time="2026-01-23T23:34:19.056355251Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 23 23:34:19.059825 containerd[1972]: time="2026-01-23T23:34:19.059761571Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:19.064233 containerd[1972]: time="2026-01-23T23:34:19.064169783Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:19.065622 containerd[1972]: time="2026-01-23T23:34:19.065574635Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.097127811s" Jan 23 23:34:19.065759 containerd[1972]: time="2026-01-23T23:34:19.065731355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 23:34:19.073661 containerd[1972]: time="2026-01-23T23:34:19.073604075Z" level=info msg="CreateContainer within sandbox \"e1f5cf6d4793a5f80b5134fcf39578ccfd8baa32ddc53e011af851ee3e7eb28a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 23:34:19.088661 containerd[1972]: time="2026-01-23T23:34:19.087754283Z" level=info msg="Container 7a9d1b22a61363d2bb2dca29d6bfefd465253e9bfbadd4a8c8c20fdfc09bdb8a: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:34:19.103641 containerd[1972]: time="2026-01-23T23:34:19.103566875Z" level=info msg="CreateContainer within sandbox \"e1f5cf6d4793a5f80b5134fcf39578ccfd8baa32ddc53e011af851ee3e7eb28a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7a9d1b22a61363d2bb2dca29d6bfefd465253e9bfbadd4a8c8c20fdfc09bdb8a\"" Jan 23 23:34:19.105062 containerd[1972]: time="2026-01-23T23:34:19.104571431Z" level=info msg="StartContainer for \"7a9d1b22a61363d2bb2dca29d6bfefd465253e9bfbadd4a8c8c20fdfc09bdb8a\"" Jan 23 23:34:19.107321 containerd[1972]: time="2026-01-23T23:34:19.107269811Z" level=info msg="connecting to shim 7a9d1b22a61363d2bb2dca29d6bfefd465253e9bfbadd4a8c8c20fdfc09bdb8a" address="unix:///run/containerd/s/0586290d40333251d6b24cb7e74b077f02f4956b616ea5d90241195b4d28c484" protocol=ttrpc version=3 Jan 23 23:34:19.146348 systemd[1]: Started cri-containerd-7a9d1b22a61363d2bb2dca29d6bfefd465253e9bfbadd4a8c8c20fdfc09bdb8a.scope - libcontainer container 7a9d1b22a61363d2bb2dca29d6bfefd465253e9bfbadd4a8c8c20fdfc09bdb8a. 
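The pull records above report both the bytes fetched for quay.io/tigera/operator:v1.38.7 and the wall-clock pull time, which is enough for a back-of-the-envelope transfer rate. A minimal sketch using only the figures printed in the log (the exact semantics of containerd's "size" field are not assumed here; "bytes read" is taken as the data fetched from the registry):

    # Figures copied verbatim from the containerd records above.
    bytes_read    = 20_773_434      # "stop pulling image ... bytes read=20773434"
    reported_size = 22_147_999      # size "22147999" in the Pulled-image record
    pull_seconds  = 2.097127811     # "... in 2.097127811s"

    print(f"registry transfer rate : {bytes_read / pull_seconds / 1e6:.1f} MB/s")
    print(f"reported-size rate     : {reported_size / pull_seconds / 1e6:.1f} MB/s")

Either way this ~20 MB operator image came down at roughly 10 MB/s before container 7a9d1b22... was created and started inside the existing sandbox.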
Jan 23 23:34:19.169000 audit: BPF prog-id=149 op=LOAD Jan 23 23:34:19.170000 audit: BPF prog-id=150 op=LOAD Jan 23 23:34:19.170000 audit[3731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3551 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:19.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396431623232613631333633643262623264636132396436626665 Jan 23 23:34:19.170000 audit: BPF prog-id=150 op=UNLOAD Jan 23 23:34:19.170000 audit[3731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3551 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:19.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396431623232613631333633643262623264636132396436626665 Jan 23 23:34:19.171000 audit: BPF prog-id=151 op=LOAD Jan 23 23:34:19.171000 audit[3731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3551 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:19.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396431623232613631333633643262623264636132396436626665 Jan 23 23:34:19.171000 audit: BPF prog-id=152 op=LOAD Jan 23 23:34:19.171000 audit[3731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3551 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:19.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396431623232613631333633643262623264636132396436626665 Jan 23 23:34:19.171000 audit: BPF prog-id=152 op=UNLOAD Jan 23 23:34:19.171000 audit[3731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3551 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:19.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396431623232613631333633643262623264636132396436626665 Jan 23 23:34:19.172000 audit: BPF prog-id=151 op=UNLOAD Jan 23 23:34:19.172000 audit[3731]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3551 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:19.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396431623232613631333633643262623264636132396436626665 Jan 23 23:34:19.172000 audit: BPF prog-id=153 op=LOAD Jan 23 23:34:19.172000 audit[3731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3551 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:19.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761396431623232613631333633643262623264636132396436626665 Jan 23 23:34:19.205976 containerd[1972]: time="2026-01-23T23:34:19.205872515Z" level=info msg="StartContainer for \"7a9d1b22a61363d2bb2dca29d6bfefd465253e9bfbadd4a8c8c20fdfc09bdb8a\" returns successfully" Jan 23 23:34:19.853058 kubelet[3413]: I0123 23:34:19.852733 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xwcnw" podStartSLOduration=1.750424164 podStartE2EDuration="3.852710055s" podCreationTimestamp="2026-01-23 23:34:16 +0000 UTC" firstStartedPulling="2026-01-23 23:34:16.964840176 +0000 UTC m=+6.624801946" lastFinishedPulling="2026-01-23 23:34:19.067126055 +0000 UTC m=+8.727087837" observedRunningTime="2026-01-23 23:34:19.823165634 +0000 UTC m=+9.483127416" watchObservedRunningTime="2026-01-23 23:34:19.852710055 +0000 UTC m=+9.512671849" Jan 23 23:34:28.240000 audit[2336]: USER_END pid=2336 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 23:34:28.243205 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 23 23:34:28.243286 kernel: audit: type=1106 audit(1769211268.240:519): pid=2336 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 23:34:28.241486 sudo[2336]: pam_unix(sudo:session): session closed for user root Jan 23 23:34:28.240000 audit[2336]: CRED_DISP pid=2336 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 23:34:28.259598 kernel: audit: type=1104 audit(1769211268.240:520): pid=2336 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 23:34:28.326010 sshd[2335]: Connection closed by 20.161.92.111 port 55572 Jan 23 23:34:28.326838 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Jan 23 23:34:28.330000 audit[2331]: USER_END pid=2331 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:34:28.341491 systemd[1]: sshd@6-172.31.23.100:22-20.161.92.111:55572.service: Deactivated successfully. Jan 23 23:34:28.330000 audit[2331]: CRED_DISP pid=2331 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:34:28.354948 kernel: audit: type=1106 audit(1769211268.330:521): pid=2331 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:34:28.355032 kernel: audit: type=1104 audit(1769211268.330:522): pid=2331 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:34:28.355460 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:34:28.355899 systemd[1]: session-8.scope: Consumed 12.094s CPU time, 224.4M memory peak. Jan 23 23:34:28.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.23.100:22-20.161.92.111:55572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:34:28.370676 kernel: audit: type=1131 audit(1769211268.341:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.23.100:22-20.161.92.111:55572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:34:28.374835 systemd-logind[1934]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:34:28.376784 systemd-logind[1934]: Removed session 8. 
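A few records above, kubelet's pod_startup_latency_tracker prints two durations for tigera-operator-7dcd859c48-xwcnw together with the timestamps they derive from, so the arithmetic can be reproduced straight from the log. A minimal sketch that only re-derives the printed numbers (not a statement about kubelet internals; the pull interval is taken from the monotonic m=+... offsets, which is what makes the figures line up exactly):

    # Timestamps copied from the pod_startup_latency_tracker record, expressed
    # as seconds past 23:34 (wall clock) or as the monotonic m=+... offsets.
    created         = 16.000000000   # podCreationTimestamp 23:34:16
    watch_observed  = 19.852710055   # watchObservedRunningTime
    pull_started_m  = 6.624801946    # firstStartedPulling, m=+6.624801946
    pull_finished_m = 8.727087837    # lastFinishedPulling,  m=+8.727087837

    e2e = watch_observed - created                   # podStartE2EDuration
    slo = e2e - (pull_finished_m - pull_started_m)   # podStartSLOduration
    print(f"podStartE2EDuration ~ {e2e:.9f}s")       # 3.852710055s
    print(f"podStartSLOduration ~ {slo:.9f}s")       # 1.750424164s

In other words, of the ~3.85 s from pod creation to the pod being observed running, about 2.10 s went to pulling the operator image, leaving ~1.75 s attributed to pod startup itself.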
Jan 23 23:34:31.020000 audit[3809]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:31.020000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffea02b110 a2=0 a3=1 items=0 ppid=3525 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:31.035970 kernel: audit: type=1325 audit(1769211271.020:524): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:31.036053 kernel: audit: type=1300 audit(1769211271.020:524): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffea02b110 a2=0 a3=1 items=0 ppid=3525 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:31.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:31.045511 kernel: audit: type=1327 audit(1769211271.020:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:31.028000 audit[3809]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:31.052692 kernel: audit: type=1325 audit(1769211271.028:525): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:31.028000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea02b110 a2=0 a3=1 items=0 ppid=3525 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:31.061948 kernel: audit: type=1300 audit(1769211271.028:525): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea02b110 a2=0 a3=1 items=0 ppid=3525 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:31.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:31.108000 audit[3811]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:31.108000 audit[3811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcbde08f0 a2=0 a3=1 items=0 ppid=3525 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:31.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:31.114000 audit[3811]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3811 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 23 23:34:31.114000 audit[3811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcbde08f0 a2=0 a3=1 items=0 ppid=3525 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:31.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:40.160000 audit[3813]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:40.162741 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 23 23:34:40.162829 kernel: audit: type=1325 audit(1769211280.160:528): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:40.160000 audit[3813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffe5e1c00 a2=0 a3=1 items=0 ppid=3525 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:40.175954 kernel: audit: type=1300 audit(1769211280.160:528): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffe5e1c00 a2=0 a3=1 items=0 ppid=3525 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:40.160000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:40.181544 kernel: audit: type=1327 audit(1769211280.160:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:40.183000 audit[3813]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:40.183000 audit[3813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe5e1c00 a2=0 a3=1 items=0 ppid=3525 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:40.197654 kernel: audit: type=1325 audit(1769211280.183:529): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:40.197783 kernel: audit: type=1300 audit(1769211280.183:529): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe5e1c00 a2=0 a3=1 items=0 ppid=3525 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:40.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:40.201171 kernel: audit: type=1327 audit(1769211280.183:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:40.261000 audit[3815]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:40.266972 kernel: audit: type=1325 audit(1769211280.261:530): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:40.261000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff5b0b570 a2=0 a3=1 items=0 ppid=3525 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:40.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:40.278525 kernel: audit: type=1300 audit(1769211280.261:530): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff5b0b570 a2=0 a3=1 items=0 ppid=3525 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:40.278838 kernel: audit: type=1327 audit(1769211280.261:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:40.280037 kernel: audit: type=1325 audit(1769211280.278:531): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:40.278000 audit[3815]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:40.278000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5b0b570 a2=0 a3=1 items=0 ppid=3525 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:40.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:41.322000 audit[3818]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:41.322000 audit[3818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdf7ab980 a2=0 a3=1 items=0 ppid=3525 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:41.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:41.330000 audit[3818]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:41.330000 audit[3818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdf7ab980 a2=0 a3=1 items=0 ppid=3525 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:41.330000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:44.089175 systemd[1]: Created slice kubepods-besteffort-podbed078f2_a39c_45c5_836e_a0e3e7397aab.slice - libcontainer container kubepods-besteffort-podbed078f2_a39c_45c5_836e_a0e3e7397aab.slice. Jan 23 23:34:44.093125 kubelet[3413]: I0123 23:34:44.092446 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-455s4\" (UniqueName: \"kubernetes.io/projected/bed078f2-a39c-45c5-836e-a0e3e7397aab-kube-api-access-455s4\") pod \"calico-typha-7ff5b8cff6-pg9mk\" (UID: \"bed078f2-a39c-45c5-836e-a0e3e7397aab\") " pod="calico-system/calico-typha-7ff5b8cff6-pg9mk" Jan 23 23:34:44.093125 kubelet[3413]: I0123 23:34:44.092509 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed078f2-a39c-45c5-836e-a0e3e7397aab-tigera-ca-bundle\") pod \"calico-typha-7ff5b8cff6-pg9mk\" (UID: \"bed078f2-a39c-45c5-836e-a0e3e7397aab\") " pod="calico-system/calico-typha-7ff5b8cff6-pg9mk" Jan 23 23:34:44.093125 kubelet[3413]: I0123 23:34:44.092549 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bed078f2-a39c-45c5-836e-a0e3e7397aab-typha-certs\") pod \"calico-typha-7ff5b8cff6-pg9mk\" (UID: \"bed078f2-a39c-45c5-836e-a0e3e7397aab\") " pod="calico-system/calico-typha-7ff5b8cff6-pg9mk" Jan 23 23:34:44.158000 audit[3821]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:44.158000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdd823840 a2=0 a3=1 items=0 ppid=3525 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:44.163000 audit[3821]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:44.163000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdd823840 a2=0 a3=1 items=0 ppid=3525 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:44.194000 audit[3823]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:44.194000 audit[3823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe7c1d090 a2=0 a3=1 items=0 ppid=3525 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.194000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:44.211000 audit[3823]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:44.211000 audit[3823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7c1d090 a2=0 a3=1 items=0 ppid=3525 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.211000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:44.347835 systemd[1]: Created slice kubepods-besteffort-pod93de163a_56a8_4e66_9112_c9d35b2f9dfe.slice - libcontainer container kubepods-besteffort-pod93de163a_56a8_4e66_9112_c9d35b2f9dfe.slice. Jan 23 23:34:44.394802 kubelet[3413]: I0123 23:34:44.394671 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/93de163a-56a8-4e66-9112-c9d35b2f9dfe-cni-bin-dir\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.395047 kubelet[3413]: I0123 23:34:44.394856 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93de163a-56a8-4e66-9112-c9d35b2f9dfe-var-lib-calico\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.395218 kubelet[3413]: I0123 23:34:44.395040 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/93de163a-56a8-4e66-9112-c9d35b2f9dfe-cni-log-dir\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.395218 kubelet[3413]: I0123 23:34:44.395146 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93de163a-56a8-4e66-9112-c9d35b2f9dfe-lib-modules\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.395895 kubelet[3413]: I0123 23:34:44.395231 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/93de163a-56a8-4e66-9112-c9d35b2f9dfe-cni-net-dir\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.395895 kubelet[3413]: I0123 23:34:44.395377 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/93de163a-56a8-4e66-9112-c9d35b2f9dfe-flexvol-driver-host\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.395895 kubelet[3413]: I0123 23:34:44.395471 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/93de163a-56a8-4e66-9112-c9d35b2f9dfe-tigera-ca-bundle\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.395895 kubelet[3413]: I0123 23:34:44.395565 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjkxl\" (UniqueName: \"kubernetes.io/projected/93de163a-56a8-4e66-9112-c9d35b2f9dfe-kube-api-access-wjkxl\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.395895 kubelet[3413]: I0123 23:34:44.395682 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93de163a-56a8-4e66-9112-c9d35b2f9dfe-xtables-lock\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.396324 kubelet[3413]: I0123 23:34:44.395846 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/93de163a-56a8-4e66-9112-c9d35b2f9dfe-var-run-calico\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.396324 kubelet[3413]: I0123 23:34:44.396005 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/93de163a-56a8-4e66-9112-c9d35b2f9dfe-node-certs\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.396324 kubelet[3413]: I0123 23:34:44.396088 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/93de163a-56a8-4e66-9112-c9d35b2f9dfe-policysync\") pod \"calico-node-8q2pl\" (UID: \"93de163a-56a8-4e66-9112-c9d35b2f9dfe\") " pod="calico-system/calico-node-8q2pl" Jan 23 23:34:44.403777 containerd[1972]: time="2026-01-23T23:34:44.403701108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ff5b8cff6-pg9mk,Uid:bed078f2-a39c-45c5-836e-a0e3e7397aab,Namespace:calico-system,Attempt:0,}" Jan 23 23:34:44.437056 containerd[1972]: time="2026-01-23T23:34:44.436880113Z" level=info msg="connecting to shim a2a5f2ce9b45b84f5e54d53d8f0f5c434874b083e008808cb5c358f6e7e046ad" address="unix:///run/containerd/s/0181753f4b4b852ff688f4b5c7ab8b6bec7871645b75711ae5b4bd128d55c0b7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:34:44.499277 systemd[1]: Started cri-containerd-a2a5f2ce9b45b84f5e54d53d8f0f5c434874b083e008808cb5c358f6e7e046ad.scope - libcontainer container a2a5f2ce9b45b84f5e54d53d8f0f5c434874b083e008808cb5c358f6e7e046ad. Jan 23 23:34:44.513242 kubelet[3413]: E0123 23:34:44.513199 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.513886 kubelet[3413]: W0123 23:34:44.513433 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.513886 kubelet[3413]: E0123 23:34:44.513492 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.516641 kubelet[3413]: E0123 23:34:44.516582 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.516848 kubelet[3413]: W0123 23:34:44.516818 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.516996 kubelet[3413]: E0123 23:34:44.516971 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.518673 kubelet[3413]: E0123 23:34:44.518275 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.518673 kubelet[3413]: W0123 23:34:44.518310 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.518673 kubelet[3413]: E0123 23:34:44.518342 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.524183 kubelet[3413]: E0123 23:34:44.524142 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.524637 kubelet[3413]: W0123 23:34:44.524601 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.525018 kubelet[3413]: E0123 23:34:44.524862 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.534661 kubelet[3413]: E0123 23:34:44.534406 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.534661 kubelet[3413]: W0123 23:34:44.534446 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.534661 kubelet[3413]: E0123 23:34:44.534481 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.535403 kubelet[3413]: E0123 23:34:44.535182 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.535403 kubelet[3413]: W0123 23:34:44.535210 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.535403 kubelet[3413]: E0123 23:34:44.535238 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.535939 kubelet[3413]: E0123 23:34:44.535804 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.535939 kubelet[3413]: W0123 23:34:44.535832 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.535939 kubelet[3413]: E0123 23:34:44.535861 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.537954 kubelet[3413]: E0123 23:34:44.537686 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.537954 kubelet[3413]: W0123 23:34:44.537722 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.537954 kubelet[3413]: E0123 23:34:44.537754 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.540507 kubelet[3413]: E0123 23:34:44.540177 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.540507 kubelet[3413]: W0123 23:34:44.540216 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.540507 kubelet[3413]: E0123 23:34:44.540249 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.541974 kubelet[3413]: E0123 23:34:44.541893 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.542188 kubelet[3413]: W0123 23:34:44.542159 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.542315 kubelet[3413]: E0123 23:34:44.542291 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.543087 kubelet[3413]: E0123 23:34:44.543054 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.543246 kubelet[3413]: W0123 23:34:44.543219 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.543725 kubelet[3413]: E0123 23:34:44.543369 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.546309 kubelet[3413]: E0123 23:34:44.546268 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.546496 kubelet[3413]: W0123 23:34:44.546469 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.546724 kubelet[3413]: E0123 23:34:44.546680 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.548215 kubelet[3413]: E0123 23:34:44.547898 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.548215 kubelet[3413]: W0123 23:34:44.547951 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.548215 kubelet[3413]: E0123 23:34:44.547984 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.548810 kubelet[3413]: E0123 23:34:44.548771 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.549186 kubelet[3413]: W0123 23:34:44.548999 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.549186 kubelet[3413]: E0123 23:34:44.549035 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.554043 kubelet[3413]: E0123 23:34:44.552087 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.554256 kubelet[3413]: W0123 23:34:44.553997 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.554256 kubelet[3413]: E0123 23:34:44.554203 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.558123 kubelet[3413]: E0123 23:34:44.558001 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.558123 kubelet[3413]: W0123 23:34:44.558059 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.558508 kubelet[3413]: E0123 23:34:44.558093 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.561491 kubelet[3413]: E0123 23:34:44.561416 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.561491 kubelet[3413]: W0123 23:34:44.561451 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.565453 kubelet[3413]: E0123 23:34:44.565034 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.567902 kubelet[3413]: E0123 23:34:44.567090 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.567902 kubelet[3413]: W0123 23:34:44.567131 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.567902 kubelet[3413]: E0123 23:34:44.567163 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.567902 kubelet[3413]: E0123 23:34:44.568148 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.567902 kubelet[3413]: W0123 23:34:44.568173 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.567902 kubelet[3413]: E0123 23:34:44.568202 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.572170 kubelet[3413]: E0123 23:34:44.570190 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.572170 kubelet[3413]: W0123 23:34:44.570232 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.572170 kubelet[3413]: E0123 23:34:44.570266 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.572170 kubelet[3413]: E0123 23:34:44.570567 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.572170 kubelet[3413]: W0123 23:34:44.570583 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.572170 kubelet[3413]: E0123 23:34:44.570603 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.573301 kubelet[3413]: E0123 23:34:44.573252 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.573301 kubelet[3413]: W0123 23:34:44.573290 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.573519 kubelet[3413]: E0123 23:34:44.573322 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.573892 kubelet[3413]: E0123 23:34:44.573777 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.573892 kubelet[3413]: W0123 23:34:44.573839 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.573892 kubelet[3413]: E0123 23:34:44.573869 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.574348 kubelet[3413]: E0123 23:34:44.574184 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.574348 kubelet[3413]: W0123 23:34:44.574214 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.574348 kubelet[3413]: E0123 23:34:44.574238 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.576013 kubelet[3413]: E0123 23:34:44.575736 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.576013 kubelet[3413]: W0123 23:34:44.575775 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.576013 kubelet[3413]: E0123 23:34:44.575807 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.581136 kubelet[3413]: E0123 23:34:44.580052 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.581136 kubelet[3413]: W0123 23:34:44.580090 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.581136 kubelet[3413]: E0123 23:34:44.580126 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.601000 audit: BPF prog-id=154 op=LOAD Jan 23 23:34:44.603000 audit: BPF prog-id=155 op=LOAD Jan 23 23:34:44.603000 audit[3845]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3834 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613566326365396234356238346635653534643533643866306635 Jan 23 23:34:44.605000 audit: BPF prog-id=155 op=UNLOAD Jan 23 23:34:44.605000 audit[3845]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3834 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613566326365396234356238346635653534643533643866306635 Jan 23 23:34:44.607000 audit: BPF prog-id=156 op=LOAD Jan 23 23:34:44.607000 audit[3845]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3834 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613566326365396234356238346635653534643533643866306635 Jan 23 23:34:44.611000 audit: BPF prog-id=157 op=LOAD Jan 23 23:34:44.611000 audit[3845]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3834 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613566326365396234356238346635653534643533643866306635 Jan 23 23:34:44.612000 audit: BPF prog-id=157 op=UNLOAD Jan 23 23:34:44.612000 audit[3845]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3834 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613566326365396234356238346635653534643533643866306635 Jan 23 23:34:44.613000 audit: BPF prog-id=156 op=UNLOAD Jan 
23 23:34:44.613000 audit[3845]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3834 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613566326365396234356238346635653534643533643866306635 Jan 23 23:34:44.613000 audit: BPF prog-id=158 op=LOAD Jan 23 23:34:44.613000 audit[3845]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3834 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613566326365396234356238346635653534643533643866306635 Jan 23 23:34:44.630359 kubelet[3413]: E0123 23:34:44.630048 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.630567 kubelet[3413]: W0123 23:34:44.630514 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.630961 kubelet[3413]: E0123 23:34:44.630556 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.632128 kubelet[3413]: E0123 23:34:44.631994 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:34:44.664494 containerd[1972]: time="2026-01-23T23:34:44.664413566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8q2pl,Uid:93de163a-56a8-4e66-9112-c9d35b2f9dfe,Namespace:calico-system,Attempt:0,}" Jan 23 23:34:44.694867 kubelet[3413]: E0123 23:34:44.694821 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.694867 kubelet[3413]: W0123 23:34:44.694859 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.695239 kubelet[3413]: E0123 23:34:44.695021 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.697209 kubelet[3413]: E0123 23:34:44.697106 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.697376 kubelet[3413]: W0123 23:34:44.697201 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.697376 kubelet[3413]: E0123 23:34:44.697278 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.699844 kubelet[3413]: E0123 23:34:44.699454 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.699844 kubelet[3413]: W0123 23:34:44.699494 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.699844 kubelet[3413]: E0123 23:34:44.699558 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.701878 kubelet[3413]: E0123 23:34:44.701633 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.701878 kubelet[3413]: W0123 23:34:44.701682 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.701878 kubelet[3413]: E0123 23:34:44.701718 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.705962 kubelet[3413]: E0123 23:34:44.704895 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.705962 kubelet[3413]: W0123 23:34:44.705090 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.705962 kubelet[3413]: E0123 23:34:44.705269 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.706682 kubelet[3413]: E0123 23:34:44.706630 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.706682 kubelet[3413]: W0123 23:34:44.706671 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.706821 kubelet[3413]: E0123 23:34:44.706703 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.708099 kubelet[3413]: E0123 23:34:44.708047 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.708099 kubelet[3413]: W0123 23:34:44.708086 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.708305 kubelet[3413]: E0123 23:34:44.708119 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.710429 kubelet[3413]: E0123 23:34:44.710157 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.710429 kubelet[3413]: W0123 23:34:44.710196 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.710429 kubelet[3413]: E0123 23:34:44.710229 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.711941 kubelet[3413]: E0123 23:34:44.711832 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.711941 kubelet[3413]: W0123 23:34:44.711876 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.712291 kubelet[3413]: E0123 23:34:44.712005 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.714291 kubelet[3413]: E0123 23:34:44.714201 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.714291 kubelet[3413]: W0123 23:34:44.714240 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.714291 kubelet[3413]: E0123 23:34:44.714272 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.717416 kubelet[3413]: E0123 23:34:44.717361 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.717416 kubelet[3413]: W0123 23:34:44.717402 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.718679 kubelet[3413]: E0123 23:34:44.717436 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.718679 kubelet[3413]: E0123 23:34:44.718657 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.718874 kubelet[3413]: W0123 23:34:44.718685 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.718874 kubelet[3413]: E0123 23:34:44.718718 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.720215 kubelet[3413]: E0123 23:34:44.720164 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.720215 kubelet[3413]: W0123 23:34:44.720202 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.720444 kubelet[3413]: E0123 23:34:44.720239 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.721292 kubelet[3413]: E0123 23:34:44.721211 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.721292 kubelet[3413]: W0123 23:34:44.721252 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.721292 kubelet[3413]: E0123 23:34:44.721284 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.724115 kubelet[3413]: E0123 23:34:44.724063 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.724115 kubelet[3413]: W0123 23:34:44.724103 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.724529 kubelet[3413]: E0123 23:34:44.724138 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.726137 kubelet[3413]: E0123 23:34:44.726061 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.726137 kubelet[3413]: W0123 23:34:44.726098 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.726137 kubelet[3413]: E0123 23:34:44.726129 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.729065 kubelet[3413]: E0123 23:34:44.727462 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.729065 kubelet[3413]: W0123 23:34:44.727507 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.729065 kubelet[3413]: E0123 23:34:44.727540 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.729379 kubelet[3413]: E0123 23:34:44.729278 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.729379 kubelet[3413]: W0123 23:34:44.729304 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.729379 kubelet[3413]: E0123 23:34:44.729352 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.729674 kubelet[3413]: E0123 23:34:44.729634 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.729674 kubelet[3413]: W0123 23:34:44.729661 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.729800 kubelet[3413]: E0123 23:34:44.729684 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.730719 kubelet[3413]: E0123 23:34:44.730671 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.730719 kubelet[3413]: W0123 23:34:44.730707 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.731039 kubelet[3413]: E0123 23:34:44.730737 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.732535 kubelet[3413]: E0123 23:34:44.732488 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.732535 kubelet[3413]: W0123 23:34:44.732526 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.732756 kubelet[3413]: E0123 23:34:44.732561 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.732756 kubelet[3413]: I0123 23:34:44.732617 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc435867-361d-4b3f-a3e1-96c440fc0a66-kubelet-dir\") pod \"csi-node-driver-cvgjz\" (UID: \"bc435867-361d-4b3f-a3e1-96c440fc0a66\") " pod="calico-system/csi-node-driver-cvgjz" Jan 23 23:34:44.736201 kubelet[3413]: E0123 23:34:44.735756 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.736201 kubelet[3413]: W0123 23:34:44.735800 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.736201 kubelet[3413]: E0123 23:34:44.735849 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.736201 kubelet[3413]: I0123 23:34:44.735931 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bc435867-361d-4b3f-a3e1-96c440fc0a66-registration-dir\") pod \"csi-node-driver-cvgjz\" (UID: \"bc435867-361d-4b3f-a3e1-96c440fc0a66\") " pod="calico-system/csi-node-driver-cvgjz" Jan 23 23:34:44.736511 containerd[1972]: time="2026-01-23T23:34:44.736290098Z" level=info msg="connecting to shim 89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af" address="unix:///run/containerd/s/7f6ff468b27ad1f0d9dd5bc74be9191d9289b9e8edf738d1a9bbeef05db12cc0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:34:44.737582 kubelet[3413]: E0123 23:34:44.737353 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.738117 kubelet[3413]: W0123 23:34:44.737935 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.739173 kubelet[3413]: E0123 23:34:44.738839 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.741927 kubelet[3413]: E0123 23:34:44.741773 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.742467 kubelet[3413]: W0123 23:34:44.742024 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.742467 kubelet[3413]: E0123 23:34:44.742064 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.743288 kubelet[3413]: E0123 23:34:44.743256 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.744441 kubelet[3413]: W0123 23:34:44.743981 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.744441 kubelet[3413]: E0123 23:34:44.744033 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.744441 kubelet[3413]: I0123 23:34:44.744202 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bc435867-361d-4b3f-a3e1-96c440fc0a66-socket-dir\") pod \"csi-node-driver-cvgjz\" (UID: \"bc435867-361d-4b3f-a3e1-96c440fc0a66\") " pod="calico-system/csi-node-driver-cvgjz" Jan 23 23:34:44.745170 kubelet[3413]: E0123 23:34:44.745126 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.745170 kubelet[3413]: W0123 23:34:44.745362 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.745170 kubelet[3413]: E0123 23:34:44.745399 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.747594 kubelet[3413]: E0123 23:34:44.747556 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.748640 kubelet[3413]: W0123 23:34:44.748591 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.749004 kubelet[3413]: E0123 23:34:44.748973 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.750182 kubelet[3413]: E0123 23:34:44.750109 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.750672 kubelet[3413]: W0123 23:34:44.750363 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.750672 kubelet[3413]: E0123 23:34:44.750401 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.750672 kubelet[3413]: I0123 23:34:44.750615 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqqlm\" (UniqueName: \"kubernetes.io/projected/bc435867-361d-4b3f-a3e1-96c440fc0a66-kube-api-access-sqqlm\") pod \"csi-node-driver-cvgjz\" (UID: \"bc435867-361d-4b3f-a3e1-96c440fc0a66\") " pod="calico-system/csi-node-driver-cvgjz" Jan 23 23:34:44.752584 kubelet[3413]: E0123 23:34:44.752201 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.752584 kubelet[3413]: W0123 23:34:44.752238 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.752584 kubelet[3413]: E0123 23:34:44.752270 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.754018 kubelet[3413]: E0123 23:34:44.753830 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.754667 kubelet[3413]: W0123 23:34:44.754446 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.754667 kubelet[3413]: E0123 23:34:44.754523 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.755285 kubelet[3413]: E0123 23:34:44.755208 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.755285 kubelet[3413]: W0123 23:34:44.755246 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.755285 kubelet[3413]: E0123 23:34:44.755278 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.756389 kubelet[3413]: I0123 23:34:44.755801 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bc435867-361d-4b3f-a3e1-96c440fc0a66-varrun\") pod \"csi-node-driver-cvgjz\" (UID: \"bc435867-361d-4b3f-a3e1-96c440fc0a66\") " pod="calico-system/csi-node-driver-cvgjz" Jan 23 23:34:44.756683 kubelet[3413]: E0123 23:34:44.756456 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.756683 kubelet[3413]: W0123 23:34:44.756480 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.756683 kubelet[3413]: E0123 23:34:44.756509 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.757987 kubelet[3413]: E0123 23:34:44.757814 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.757987 kubelet[3413]: W0123 23:34:44.757852 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.757987 kubelet[3413]: E0123 23:34:44.757890 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.759107 kubelet[3413]: E0123 23:34:44.759059 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.759107 kubelet[3413]: W0123 23:34:44.759088 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.759244 kubelet[3413]: E0123 23:34:44.759118 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.759691 kubelet[3413]: E0123 23:34:44.759647 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.759691 kubelet[3413]: W0123 23:34:44.759679 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.759821 kubelet[3413]: E0123 23:34:44.759705 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.790874 containerd[1972]: time="2026-01-23T23:34:44.790757906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ff5b8cff6-pg9mk,Uid:bed078f2-a39c-45c5-836e-a0e3e7397aab,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2a5f2ce9b45b84f5e54d53d8f0f5c434874b083e008808cb5c358f6e7e046ad\"" Jan 23 23:34:44.798643 containerd[1972]: time="2026-01-23T23:34:44.798583526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 23:34:44.847706 systemd[1]: Started cri-containerd-89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af.scope - libcontainer container 89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af. Jan 23 23:34:44.858507 kubelet[3413]: E0123 23:34:44.858083 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.858507 kubelet[3413]: W0123 23:34:44.858124 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.858507 kubelet[3413]: E0123 23:34:44.858186 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.860477 kubelet[3413]: E0123 23:34:44.860370 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.860477 kubelet[3413]: W0123 23:34:44.860406 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.860477 kubelet[3413]: E0123 23:34:44.860440 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.862520 kubelet[3413]: E0123 23:34:44.861844 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.862520 kubelet[3413]: W0123 23:34:44.861889 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.863155 kubelet[3413]: E0123 23:34:44.862987 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.868664 kubelet[3413]: E0123 23:34:44.867886 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.868664 kubelet[3413]: W0123 23:34:44.868387 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.868664 kubelet[3413]: E0123 23:34:44.868438 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.869978 kubelet[3413]: E0123 23:34:44.869749 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.870146 kubelet[3413]: W0123 23:34:44.870116 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.870672 kubelet[3413]: E0123 23:34:44.870630 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.874089 kubelet[3413]: E0123 23:34:44.874035 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.874731 kubelet[3413]: W0123 23:34:44.874285 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.874731 kubelet[3413]: E0123 23:34:44.874323 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.876932 kubelet[3413]: E0123 23:34:44.876877 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.877385 kubelet[3413]: W0123 23:34:44.877108 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.877385 kubelet[3413]: E0123 23:34:44.877150 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.877905 kubelet[3413]: E0123 23:34:44.877874 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.878327 kubelet[3413]: W0123 23:34:44.878070 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.878327 kubelet[3413]: E0123 23:34:44.878111 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.878835 kubelet[3413]: E0123 23:34:44.878797 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.879239 kubelet[3413]: W0123 23:34:44.879031 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.879239 kubelet[3413]: E0123 23:34:44.879065 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.880536 kubelet[3413]: E0123 23:34:44.880225 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.880536 kubelet[3413]: W0123 23:34:44.880256 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.880536 kubelet[3413]: E0123 23:34:44.880287 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.881444 kubelet[3413]: E0123 23:34:44.881252 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.881444 kubelet[3413]: W0123 23:34:44.881285 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.881444 kubelet[3413]: E0123 23:34:44.881315 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.882333 kubelet[3413]: E0123 23:34:44.882196 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.882508 kubelet[3413]: W0123 23:34:44.882476 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.882629 kubelet[3413]: E0123 23:34:44.882605 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.883446 kubelet[3413]: E0123 23:34:44.883412 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.883789 kubelet[3413]: W0123 23:34:44.883658 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.883789 kubelet[3413]: E0123 23:34:44.883694 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.884608 kubelet[3413]: E0123 23:34:44.884357 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.884608 kubelet[3413]: W0123 23:34:44.884387 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.884608 kubelet[3413]: E0123 23:34:44.884413 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.886291 kubelet[3413]: E0123 23:34:44.886141 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.886291 kubelet[3413]: W0123 23:34:44.886192 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.886291 kubelet[3413]: E0123 23:34:44.886228 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.887313 kubelet[3413]: E0123 23:34:44.887273 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.888082 kubelet[3413]: W0123 23:34:44.887415 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.888082 kubelet[3413]: E0123 23:34:44.887444 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.889441 kubelet[3413]: E0123 23:34:44.889351 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.889441 kubelet[3413]: W0123 23:34:44.889390 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.890130 kubelet[3413]: E0123 23:34:44.890015 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.891461 kubelet[3413]: E0123 23:34:44.891384 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.891461 kubelet[3413]: W0123 23:34:44.891424 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.891461 kubelet[3413]: E0123 23:34:44.891457 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.893583 kubelet[3413]: E0123 23:34:44.893537 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.893583 kubelet[3413]: W0123 23:34:44.893571 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.894043 kubelet[3413]: E0123 23:34:44.893602 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.896132 kubelet[3413]: E0123 23:34:44.896077 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.896132 kubelet[3413]: W0123 23:34:44.896118 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.896518 kubelet[3413]: E0123 23:34:44.896152 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.899088 kubelet[3413]: E0123 23:34:44.899035 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.899088 kubelet[3413]: W0123 23:34:44.899076 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.899088 kubelet[3413]: E0123 23:34:44.899110 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.900804 kubelet[3413]: E0123 23:34:44.900673 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.900804 kubelet[3413]: W0123 23:34:44.900709 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.900804 kubelet[3413]: E0123 23:34:44.900739 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.903637 kubelet[3413]: E0123 23:34:44.903592 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.903637 kubelet[3413]: W0123 23:34:44.903626 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.904063 kubelet[3413]: E0123 23:34:44.903657 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.906156 kubelet[3413]: E0123 23:34:44.906049 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.906156 kubelet[3413]: W0123 23:34:44.906090 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.906156 kubelet[3413]: E0123 23:34:44.906123 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:44.907460 kubelet[3413]: E0123 23:34:44.907325 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.907460 kubelet[3413]: W0123 23:34:44.907371 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.907460 kubelet[3413]: E0123 23:34:44.907402 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:44.934000 audit: BPF prog-id=159 op=LOAD Jan 23 23:34:44.935000 audit: BPF prog-id=160 op=LOAD Jan 23 23:34:44.935000 audit[3966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3931 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303632666366643262653035656130376333376230623264396636 Jan 23 23:34:44.936000 audit: BPF prog-id=160 op=UNLOAD Jan 23 23:34:44.936000 audit[3966]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303632666366643262653035656130376333376230623264396636 Jan 23 23:34:44.937000 audit: BPF prog-id=161 op=LOAD Jan 23 23:34:44.937000 audit[3966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3931 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303632666366643262653035656130376333376230623264396636 Jan 23 23:34:44.938000 audit: BPF prog-id=162 op=LOAD Jan 23 23:34:44.938000 audit[3966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3931 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303632666366643262653035656130376333376230623264396636 Jan 23 23:34:44.938000 audit: BPF prog-id=162 op=UNLOAD Jan 23 23:34:44.938000 audit[3966]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303632666366643262653035656130376333376230623264396636 Jan 23 23:34:44.939000 audit: BPF prog-id=161 op=UNLOAD Jan 
23 23:34:44.939000 audit[3966]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303632666366643262653035656130376333376230623264396636 Jan 23 23:34:44.939000 audit: BPF prog-id=163 op=LOAD Jan 23 23:34:44.939000 audit[3966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3931 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:44.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303632666366643262653035656130376333376230623264396636 Jan 23 23:34:44.993183 kubelet[3413]: E0123 23:34:44.993146 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:44.993183 kubelet[3413]: W0123 23:34:44.993234 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:44.993183 kubelet[3413]: E0123 23:34:44.993267 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:45.011265 containerd[1972]: time="2026-01-23T23:34:45.011039124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8q2pl,Uid:93de163a-56a8-4e66-9112-c9d35b2f9dfe,Namespace:calico-system,Attempt:0,} returns sandbox id \"89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af\"" Jan 23 23:34:46.064319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176897121.mount: Deactivated successfully. 
Jan 23 23:34:46.605829 kubelet[3413]: E0123 23:34:46.605780 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:34:47.016464 containerd[1972]: time="2026-01-23T23:34:47.016409557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:47.019951 containerd[1972]: time="2026-01-23T23:34:47.019703353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 23 23:34:47.021939 containerd[1972]: time="2026-01-23T23:34:47.021864962Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:47.025608 containerd[1972]: time="2026-01-23T23:34:47.025248110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:47.028040 containerd[1972]: time="2026-01-23T23:34:47.026413310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.227768824s" Jan 23 23:34:47.028040 containerd[1972]: time="2026-01-23T23:34:47.026467526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 23:34:47.030057 containerd[1972]: time="2026-01-23T23:34:47.029997890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 23:34:47.063964 containerd[1972]: time="2026-01-23T23:34:47.063819002Z" level=info msg="CreateContainer within sandbox \"a2a5f2ce9b45b84f5e54d53d8f0f5c434874b083e008808cb5c358f6e7e046ad\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 23:34:47.081322 containerd[1972]: time="2026-01-23T23:34:47.081238538Z" level=info msg="Container 4c1cda6fdf9dac9aabb359d1551795b25ad9c1c76d298ed6a8e0b959a8e5700c: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:34:47.097636 containerd[1972]: time="2026-01-23T23:34:47.097560338Z" level=info msg="CreateContainer within sandbox \"a2a5f2ce9b45b84f5e54d53d8f0f5c434874b083e008808cb5c358f6e7e046ad\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4c1cda6fdf9dac9aabb359d1551795b25ad9c1c76d298ed6a8e0b959a8e5700c\"" Jan 23 23:34:47.099119 containerd[1972]: time="2026-01-23T23:34:47.099048662Z" level=info msg="StartContainer for \"4c1cda6fdf9dac9aabb359d1551795b25ad9c1c76d298ed6a8e0b959a8e5700c\"" Jan 23 23:34:47.103125 containerd[1972]: time="2026-01-23T23:34:47.103003946Z" level=info msg="connecting to shim 4c1cda6fdf9dac9aabb359d1551795b25ad9c1c76d298ed6a8e0b959a8e5700c" address="unix:///run/containerd/s/0181753f4b4b852ff688f4b5c7ab8b6bec7871645b75711ae5b4bd128d55c0b7" protocol=ttrpc version=3 Jan 23 23:34:47.159307 systemd[1]: Started 
cri-containerd-4c1cda6fdf9dac9aabb359d1551795b25ad9c1c76d298ed6a8e0b959a8e5700c.scope - libcontainer container 4c1cda6fdf9dac9aabb359d1551795b25ad9c1c76d298ed6a8e0b959a8e5700c. Jan 23 23:34:47.189331 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 23 23:34:47.189470 kernel: audit: type=1334 audit(1769211287.186:554): prog-id=164 op=LOAD Jan 23 23:34:47.186000 audit: BPF prog-id=164 op=LOAD Jan 23 23:34:47.191000 audit: BPF prog-id=165 op=LOAD Jan 23 23:34:47.191000 audit[4030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.201120 kernel: audit: type=1334 audit(1769211287.191:555): prog-id=165 op=LOAD Jan 23 23:34:47.201209 kernel: audit: type=1300 audit(1769211287.191:555): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.209572 kernel: audit: type=1327 audit(1769211287.191:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.209716 kernel: audit: type=1334 audit(1769211287.191:556): prog-id=165 op=UNLOAD Jan 23 23:34:47.191000 audit: BPF prog-id=165 op=UNLOAD Jan 23 23:34:47.191000 audit[4030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.217423 kernel: audit: type=1300 audit(1769211287.191:556): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.223851 kernel: audit: type=1327 audit(1769211287.191:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.193000 audit: BPF prog-id=166 op=LOAD Jan 23 23:34:47.225862 kernel: audit: type=1334 audit(1769211287.193:557): prog-id=166 op=LOAD Jan 23 23:34:47.193000 audit[4030]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.232932 kernel: audit: type=1300 audit(1769211287.193:557): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.233051 kernel: audit: type=1327 audit(1769211287.193:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.193000 audit: BPF prog-id=167 op=LOAD Jan 23 23:34:47.193000 audit[4030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.193000 audit: BPF prog-id=167 op=UNLOAD Jan 23 23:34:47.193000 audit[4030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.193000 audit: BPF prog-id=166 op=UNLOAD Jan 23 23:34:47.193000 audit[4030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.193000 audit: BPF prog-id=168 op=LOAD Jan 23 23:34:47.193000 audit[4030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3834 pid=4030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:47.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316364613666646639646163396161626233353964313535313739 Jan 23 23:34:47.274406 containerd[1972]: time="2026-01-23T23:34:47.274055871Z" level=info msg="StartContainer for \"4c1cda6fdf9dac9aabb359d1551795b25ad9c1c76d298ed6a8e0b959a8e5700c\" returns successfully" Jan 23 23:34:47.940619 kubelet[3413]: I0123 23:34:47.940162 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7ff5b8cff6-pg9mk" podStartSLOduration=1.70743383 podStartE2EDuration="3.940065342s" podCreationTimestamp="2026-01-23 23:34:44 +0000 UTC" firstStartedPulling="2026-01-23 23:34:44.796865642 +0000 UTC m=+34.456827424" lastFinishedPulling="2026-01-23 23:34:47.029497166 +0000 UTC m=+36.689458936" observedRunningTime="2026-01-23 23:34:47.936601434 +0000 UTC m=+37.596563216" watchObservedRunningTime="2026-01-23 23:34:47.940065342 +0000 UTC m=+37.600027124" Jan 23 23:34:47.958741 kubelet[3413]: E0123 23:34:47.958685 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.958741 kubelet[3413]: W0123 23:34:47.958743 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.959001 kubelet[3413]: E0123 23:34:47.958776 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.959654 kubelet[3413]: E0123 23:34:47.959260 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.959654 kubelet[3413]: W0123 23:34:47.959292 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.959654 kubelet[3413]: E0123 23:34:47.959342 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.959948 kubelet[3413]: E0123 23:34:47.959851 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.960043 kubelet[3413]: W0123 23:34:47.959872 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.960043 kubelet[3413]: E0123 23:34:47.960017 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:47.960547 kubelet[3413]: E0123 23:34:47.960504 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.960547 kubelet[3413]: W0123 23:34:47.960543 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.960670 kubelet[3413]: E0123 23:34:47.960567 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.960973 kubelet[3413]: E0123 23:34:47.960898 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.961052 kubelet[3413]: W0123 23:34:47.960970 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.961052 kubelet[3413]: E0123 23:34:47.960992 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.961429 kubelet[3413]: E0123 23:34:47.961400 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.961503 kubelet[3413]: W0123 23:34:47.961427 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.961503 kubelet[3413]: E0123 23:34:47.961450 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.961801 kubelet[3413]: E0123 23:34:47.961774 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.961866 kubelet[3413]: W0123 23:34:47.961800 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.961866 kubelet[3413]: E0123 23:34:47.961821 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.962149 kubelet[3413]: E0123 23:34:47.962123 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.962214 kubelet[3413]: W0123 23:34:47.962161 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.962214 kubelet[3413]: E0123 23:34:47.962184 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:47.962584 kubelet[3413]: E0123 23:34:47.962556 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.962646 kubelet[3413]: W0123 23:34:47.962581 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.962646 kubelet[3413]: E0123 23:34:47.962604 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.962900 kubelet[3413]: E0123 23:34:47.962874 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.963008 kubelet[3413]: W0123 23:34:47.962899 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.963008 kubelet[3413]: E0123 23:34:47.962956 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.963305 kubelet[3413]: E0123 23:34:47.963277 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.963372 kubelet[3413]: W0123 23:34:47.963303 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.963372 kubelet[3413]: E0123 23:34:47.963324 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.963668 kubelet[3413]: E0123 23:34:47.963604 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.963668 kubelet[3413]: W0123 23:34:47.963629 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.963668 kubelet[3413]: E0123 23:34:47.963651 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.965808 kubelet[3413]: E0123 23:34:47.965746 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.965808 kubelet[3413]: W0123 23:34:47.965785 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.966018 kubelet[3413]: E0123 23:34:47.965825 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:47.966699 kubelet[3413]: E0123 23:34:47.966651 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.966699 kubelet[3413]: W0123 23:34:47.966689 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.966869 kubelet[3413]: E0123 23:34:47.966721 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:47.969422 kubelet[3413]: E0123 23:34:47.969290 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:47.969422 kubelet[3413]: W0123 23:34:47.969353 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:47.969422 kubelet[3413]: E0123 23:34:47.969388 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.000000 audit[4088]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:48.000000 audit[4088]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcd745ca0 a2=0 a3=1 items=0 ppid=3525 pid=4088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:48.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:48.007718 kubelet[3413]: E0123 23:34:48.007576 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.007718 kubelet[3413]: W0123 23:34:48.007628 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.007718 kubelet[3413]: E0123 23:34:48.007677 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.009321 kubelet[3413]: E0123 23:34:48.009178 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.009321 kubelet[3413]: W0123 23:34:48.009229 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.010379 kubelet[3413]: E0123 23:34:48.009270 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:48.010926 kubelet[3413]: E0123 23:34:48.010809 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.010926 kubelet[3413]: W0123 23:34:48.010840 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.010926 kubelet[3413]: E0123 23:34:48.010871 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.012399 kubelet[3413]: E0123 23:34:48.012360 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.012703 kubelet[3413]: W0123 23:34:48.012555 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.012703 kubelet[3413]: E0123 23:34:48.012593 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.013086 kubelet[3413]: E0123 23:34:48.013047 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.013086 kubelet[3413]: W0123 23:34:48.013079 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.013322 kubelet[3413]: E0123 23:34:48.013107 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.014060 kubelet[3413]: E0123 23:34:48.013979 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.014060 kubelet[3413]: W0123 23:34:48.014014 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.014060 kubelet[3413]: E0123 23:34:48.014045 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:48.013000 audit[4088]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:34:48.013000 audit[4088]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffcd745ca0 a2=0 a3=1 items=0 ppid=3525 pid=4088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:48.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:34:48.015748 kubelet[3413]: E0123 23:34:48.015653 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.015748 kubelet[3413]: W0123 23:34:48.015688 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.015748 kubelet[3413]: E0123 23:34:48.015718 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.016523 kubelet[3413]: E0123 23:34:48.016304 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.016523 kubelet[3413]: W0123 23:34:48.016328 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.016523 kubelet[3413]: E0123 23:34:48.016350 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.016825 kubelet[3413]: E0123 23:34:48.016804 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.017590 kubelet[3413]: W0123 23:34:48.017004 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.017949 kubelet[3413]: E0123 23:34:48.017706 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.018177 kubelet[3413]: E0123 23:34:48.018135 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.018328 kubelet[3413]: W0123 23:34:48.018265 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.018461 kubelet[3413]: E0123 23:34:48.018436 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:48.018897 kubelet[3413]: E0123 23:34:48.018875 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.019219 kubelet[3413]: W0123 23:34:48.019013 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.019219 kubelet[3413]: E0123 23:34:48.019041 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.019457 kubelet[3413]: E0123 23:34:48.019436 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.019551 kubelet[3413]: W0123 23:34:48.019530 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.019669 kubelet[3413]: E0123 23:34:48.019646 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.020107 kubelet[3413]: E0123 23:34:48.020085 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.020406 kubelet[3413]: W0123 23:34:48.020210 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.020406 kubelet[3413]: E0123 23:34:48.020239 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.020636 kubelet[3413]: E0123 23:34:48.020616 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.020738 kubelet[3413]: W0123 23:34:48.020717 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.021025 kubelet[3413]: E0123 23:34:48.020833 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.021435 kubelet[3413]: E0123 23:34:48.021413 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.021546 kubelet[3413]: W0123 23:34:48.021523 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.021658 kubelet[3413]: E0123 23:34:48.021631 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:48.022349 kubelet[3413]: E0123 23:34:48.022288 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.022349 kubelet[3413]: W0123 23:34:48.022328 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.022536 kubelet[3413]: E0123 23:34:48.022354 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.023352 kubelet[3413]: E0123 23:34:48.023107 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.023352 kubelet[3413]: W0123 23:34:48.023136 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.023352 kubelet[3413]: E0123 23:34:48.023163 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:34:48.023674 kubelet[3413]: E0123 23:34:48.023653 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:34:48.023774 kubelet[3413]: W0123 23:34:48.023753 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:34:48.024000 kubelet[3413]: E0123 23:34:48.023871 3413 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:34:48.214956 containerd[1972]: time="2026-01-23T23:34:48.212503695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:48.215996 containerd[1972]: time="2026-01-23T23:34:48.215897151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=3258" Jan 23 23:34:48.218362 containerd[1972]: time="2026-01-23T23:34:48.218280159Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:48.223136 containerd[1972]: time="2026-01-23T23:34:48.223055259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:48.224554 containerd[1972]: time="2026-01-23T23:34:48.224436891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.194378089s" Jan 23 23:34:48.224554 containerd[1972]: time="2026-01-23T23:34:48.224495091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 23:34:48.233315 containerd[1972]: time="2026-01-23T23:34:48.233238244Z" level=info msg="CreateContainer within sandbox \"89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 23:34:48.256939 containerd[1972]: time="2026-01-23T23:34:48.253237144Z" level=info msg="Container 4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:34:48.272451 containerd[1972]: time="2026-01-23T23:34:48.272399668Z" level=info msg="CreateContainer within sandbox \"89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c\"" Jan 23 23:34:48.274322 containerd[1972]: time="2026-01-23T23:34:48.273883396Z" level=info msg="StartContainer for \"4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c\"" Jan 23 23:34:48.279648 containerd[1972]: time="2026-01-23T23:34:48.279553564Z" level=info msg="connecting to shim 4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c" address="unix:///run/containerd/s/7f6ff468b27ad1f0d9dd5bc74be9191d9289b9e8edf738d1a9bbeef05db12cc0" protocol=ttrpc version=3 Jan 23 23:34:48.318263 systemd[1]: Started cri-containerd-4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c.scope - libcontainer container 4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c. 
Jan 23 23:34:48.403000 audit: BPF prog-id=169 op=LOAD Jan 23 23:34:48.403000 audit[4111]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3931 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:48.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303535653133623134366535383965343038633061323332303835 Jan 23 23:34:48.403000 audit: BPF prog-id=170 op=LOAD Jan 23 23:34:48.403000 audit[4111]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3931 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:48.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303535653133623134366535383965343038633061323332303835 Jan 23 23:34:48.403000 audit: BPF prog-id=170 op=UNLOAD Jan 23 23:34:48.403000 audit[4111]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:48.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303535653133623134366535383965343038633061323332303835 Jan 23 23:34:48.403000 audit: BPF prog-id=169 op=UNLOAD Jan 23 23:34:48.403000 audit[4111]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:48.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303535653133623134366535383965343038633061323332303835 Jan 23 23:34:48.403000 audit: BPF prog-id=171 op=LOAD Jan 23 23:34:48.403000 audit[4111]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3931 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:48.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303535653133623134366535383965343038633061323332303835 Jan 23 23:34:48.446675 containerd[1972]: time="2026-01-23T23:34:48.446603813Z" level=info msg="StartContainer for 
\"4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c\" returns successfully" Jan 23 23:34:48.478767 systemd[1]: cri-containerd-4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c.scope: Deactivated successfully. Jan 23 23:34:48.484000 audit: BPF prog-id=171 op=UNLOAD Jan 23 23:34:48.487473 containerd[1972]: time="2026-01-23T23:34:48.487222985Z" level=info msg="received container exit event container_id:\"4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c\" id:\"4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c\" pid:4123 exited_at:{seconds:1769211288 nanos:486127373}" Jan 23 23:34:48.530749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c055e13b146e589e408c0a232085f2370ad87cc935b3ed942c997f27cde925c-rootfs.mount: Deactivated successfully. Jan 23 23:34:48.608808 kubelet[3413]: E0123 23:34:48.606420 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:34:48.929514 containerd[1972]: time="2026-01-23T23:34:48.928298659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 23:34:50.606009 kubelet[3413]: E0123 23:34:50.605867 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:34:51.781514 containerd[1972]: time="2026-01-23T23:34:51.781317681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:51.784109 containerd[1972]: time="2026-01-23T23:34:51.784047705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 23 23:34:51.785389 containerd[1972]: time="2026-01-23T23:34:51.785353437Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:51.795494 containerd[1972]: time="2026-01-23T23:34:51.792088533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:34:51.795732 containerd[1972]: time="2026-01-23T23:34:51.795692001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.866559366s" Jan 23 23:34:51.795873 containerd[1972]: time="2026-01-23T23:34:51.795843849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 23:34:51.803008 containerd[1972]: time="2026-01-23T23:34:51.802106253Z" level=info msg="CreateContainer within sandbox \"89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af\" 
for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:34:51.818318 containerd[1972]: time="2026-01-23T23:34:51.818240781Z" level=info msg="Container 5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:34:51.835638 containerd[1972]: time="2026-01-23T23:34:51.835561977Z" level=info msg="CreateContainer within sandbox \"89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e\"" Jan 23 23:34:51.837256 containerd[1972]: time="2026-01-23T23:34:51.837187953Z" level=info msg="StartContainer for \"5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e\"" Jan 23 23:34:51.842229 containerd[1972]: time="2026-01-23T23:34:51.842175489Z" level=info msg="connecting to shim 5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e" address="unix:///run/containerd/s/7f6ff468b27ad1f0d9dd5bc74be9191d9289b9e8edf738d1a9bbeef05db12cc0" protocol=ttrpc version=3 Jan 23 23:34:51.879372 systemd[1]: Started cri-containerd-5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e.scope - libcontainer container 5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e. Jan 23 23:34:51.963000 audit: BPF prog-id=172 op=LOAD Jan 23 23:34:51.963000 audit[4169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3931 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562613035386230633137393232323565396338646363633662646634 Jan 23 23:34:51.963000 audit: BPF prog-id=173 op=LOAD Jan 23 23:34:51.963000 audit[4169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3931 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562613035386230633137393232323565396338646363633662646634 Jan 23 23:34:51.963000 audit: BPF prog-id=173 op=UNLOAD Jan 23 23:34:51.963000 audit[4169]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562613035386230633137393232323565396338646363633662646634 Jan 23 23:34:51.963000 audit: BPF prog-id=172 op=UNLOAD Jan 23 23:34:51.963000 audit[4169]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3931 
pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562613035386230633137393232323565396338646363633662646634 Jan 23 23:34:51.963000 audit: BPF prog-id=174 op=LOAD Jan 23 23:34:51.963000 audit[4169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3931 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:34:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562613035386230633137393232323565396338646363633662646634 Jan 23 23:34:52.001445 containerd[1972]: time="2026-01-23T23:34:52.001320018Z" level=info msg="StartContainer for \"5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e\" returns successfully" Jan 23 23:34:52.608522 kubelet[3413]: E0123 23:34:52.608023 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:34:52.977574 containerd[1972]: time="2026-01-23T23:34:52.977504207Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:34:52.987410 containerd[1972]: time="2026-01-23T23:34:52.987332375Z" level=info msg="received container exit event container_id:\"5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e\" id:\"5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e\" pid:4182 exited_at:{seconds:1769211292 nanos:986877875}" Jan 23 23:34:52.987672 systemd[1]: cri-containerd-5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e.scope: Deactivated successfully. Jan 23 23:34:52.995999 kernel: kauditd_printk_skb: 49 callbacks suppressed Jan 23 23:34:52.996195 kernel: audit: type=1334 audit(1769211292.990:575): prog-id=174 op=UNLOAD Jan 23 23:34:52.990000 audit: BPF prog-id=174 op=UNLOAD Jan 23 23:34:52.990156 systemd[1]: cri-containerd-5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e.scope: Consumed 949ms CPU time, 189.5M memory peak, 165.9M written to disk. Jan 23 23:34:53.043003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ba058b0c1792225e9c8dccc6bdf418a740b091e4650786e424f36ecbf98326e-rootfs.mount: Deactivated successfully. Jan 23 23:34:53.059733 kubelet[3413]: I0123 23:34:53.058340 3413 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:34:53.156041 systemd[1]: Created slice kubepods-besteffort-pod6c551026_ffac_43ea_999f_0823acd8fbb1.slice - libcontainer container kubepods-besteffort-pod6c551026_ffac_43ea_999f_0823acd8fbb1.slice. 
Jan 23 23:34:53.164044 kubelet[3413]: E0123 23:34:53.163969 3413 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-23-100\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-100' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Jan 23 23:34:53.182536 systemd[1]: Created slice kubepods-burstable-podff001061_ac71_4e8e_befd_96b93fbb4d6b.slice - libcontainer container kubepods-burstable-podff001061_ac71_4e8e_befd_96b93fbb4d6b.slice. Jan 23 23:34:53.258408 systemd[1]: Created slice kubepods-burstable-pod8786d69c_e728_4edd_874c_d71c00ce627e.slice - libcontainer container kubepods-burstable-pod8786d69c_e728_4edd_874c_d71c00ce627e.slice. Jan 23 23:34:53.286133 kubelet[3413]: I0123 23:34:53.265210 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c551026-ffac-43ea-999f-0823acd8fbb1-tigera-ca-bundle\") pod \"calico-kube-controllers-57b65b9-26wjv\" (UID: \"6c551026-ffac-43ea-999f-0823acd8fbb1\") " pod="calico-system/calico-kube-controllers-57b65b9-26wjv" Jan 23 23:34:53.286133 kubelet[3413]: I0123 23:34:53.265460 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2h7\" (UniqueName: \"kubernetes.io/projected/6c551026-ffac-43ea-999f-0823acd8fbb1-kube-api-access-4q2h7\") pod \"calico-kube-controllers-57b65b9-26wjv\" (UID: \"6c551026-ffac-43ea-999f-0823acd8fbb1\") " pod="calico-system/calico-kube-controllers-57b65b9-26wjv" Jan 23 23:34:53.286133 kubelet[3413]: I0123 23:34:53.265553 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff001061-ac71-4e8e-befd-96b93fbb4d6b-config-volume\") pod \"coredns-674b8bbfcf-5rq9p\" (UID: \"ff001061-ac71-4e8e-befd-96b93fbb4d6b\") " pod="kube-system/coredns-674b8bbfcf-5rq9p" Jan 23 23:34:53.286133 kubelet[3413]: I0123 23:34:53.265730 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvbv2\" (UniqueName: \"kubernetes.io/projected/ff001061-ac71-4e8e-befd-96b93fbb4d6b-kube-api-access-kvbv2\") pod \"coredns-674b8bbfcf-5rq9p\" (UID: \"ff001061-ac71-4e8e-befd-96b93fbb4d6b\") " pod="kube-system/coredns-674b8bbfcf-5rq9p" Jan 23 23:34:53.319573 systemd[1]: Created slice kubepods-besteffort-pod5be57303_da73_45f8_8222_a093d6ce8129.slice - libcontainer container kubepods-besteffort-pod5be57303_da73_45f8_8222_a093d6ce8129.slice. 
Jan 23 23:34:53.366252 kubelet[3413]: I0123 23:34:53.366172 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5be57303-da73-45f8-8222-a093d6ce8129-calico-apiserver-certs\") pod \"calico-apiserver-68b9c97bcf-2xtt8\" (UID: \"5be57303-da73-45f8-8222-a093d6ce8129\") " pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" Jan 23 23:34:53.369491 kubelet[3413]: I0123 23:34:53.367311 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbj6\" (UniqueName: \"kubernetes.io/projected/8786d69c-e728-4edd-874c-d71c00ce627e-kube-api-access-pdbj6\") pod \"coredns-674b8bbfcf-dstgq\" (UID: \"8786d69c-e728-4edd-874c-d71c00ce627e\") " pod="kube-system/coredns-674b8bbfcf-dstgq" Jan 23 23:34:53.369491 kubelet[3413]: I0123 23:34:53.368137 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8786d69c-e728-4edd-874c-d71c00ce627e-config-volume\") pod \"coredns-674b8bbfcf-dstgq\" (UID: \"8786d69c-e728-4edd-874c-d71c00ce627e\") " pod="kube-system/coredns-674b8bbfcf-dstgq" Jan 23 23:34:53.369491 kubelet[3413]: I0123 23:34:53.368217 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8tg\" (UniqueName: \"kubernetes.io/projected/5be57303-da73-45f8-8222-a093d6ce8129-kube-api-access-pv8tg\") pod \"calico-apiserver-68b9c97bcf-2xtt8\" (UID: \"5be57303-da73-45f8-8222-a093d6ce8129\") " pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" Jan 23 23:34:53.379610 systemd[1]: Created slice kubepods-besteffort-pod635ab409_2768_46a2_9edc_f4c292ab1bff.slice - libcontainer container kubepods-besteffort-pod635ab409_2768_46a2_9edc_f4c292ab1bff.slice. 
Jan 23 23:34:53.469733 kubelet[3413]: I0123 23:34:53.469171 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/635ab409-2768-46a2-9edc-f4c292ab1bff-whisker-backend-key-pair\") pod \"whisker-dc86cc467-psrjv\" (UID: \"635ab409-2768-46a2-9edc-f4c292ab1bff\") " pod="calico-system/whisker-dc86cc467-psrjv" Jan 23 23:34:53.469733 kubelet[3413]: I0123 23:34:53.469236 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/635ab409-2768-46a2-9edc-f4c292ab1bff-whisker-ca-bundle\") pod \"whisker-dc86cc467-psrjv\" (UID: \"635ab409-2768-46a2-9edc-f4c292ab1bff\") " pod="calico-system/whisker-dc86cc467-psrjv" Jan 23 23:34:53.469733 kubelet[3413]: I0123 23:34:53.469322 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65ddj\" (UniqueName: \"kubernetes.io/projected/635ab409-2768-46a2-9edc-f4c292ab1bff-kube-api-access-65ddj\") pod \"whisker-dc86cc467-psrjv\" (UID: \"635ab409-2768-46a2-9edc-f4c292ab1bff\") " pod="calico-system/whisker-dc86cc467-psrjv" Jan 23 23:34:53.474728 containerd[1972]: time="2026-01-23T23:34:53.474453418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b65b9-26wjv,Uid:6c551026-ffac-43ea-999f-0823acd8fbb1,Namespace:calico-system,Attempt:0,}" Jan 23 23:34:53.474849 systemd[1]: Created slice kubepods-besteffort-pod64fe3344_4e80_444f_ba70_e34e02720a15.slice - libcontainer container kubepods-besteffort-pod64fe3344_4e80_444f_ba70_e34e02720a15.slice. Jan 23 23:34:53.561794 systemd[1]: Created slice kubepods-besteffort-pod0699514f_51e2_4aa1_86de_4ee590fe63e1.slice - libcontainer container kubepods-besteffort-pod0699514f_51e2_4aa1_86de_4ee590fe63e1.slice. Jan 23 23:34:53.577232 kubelet[3413]: I0123 23:34:53.571141 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/64fe3344-4e80-444f-ba70-e34e02720a15-calico-apiserver-certs\") pod \"calico-apiserver-778cf5d48d-s7h4h\" (UID: \"64fe3344-4e80-444f-ba70-e34e02720a15\") " pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" Jan 23 23:34:53.577232 kubelet[3413]: I0123 23:34:53.574183 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82mcv\" (UniqueName: \"kubernetes.io/projected/64fe3344-4e80-444f-ba70-e34e02720a15-kube-api-access-82mcv\") pod \"calico-apiserver-778cf5d48d-s7h4h\" (UID: \"64fe3344-4e80-444f-ba70-e34e02720a15\") " pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" Jan 23 23:34:53.616854 systemd[1]: Created slice kubepods-besteffort-pod1b0b2743_62ac_460d_ba7c_52d229e3b875.slice - libcontainer container kubepods-besteffort-pod1b0b2743_62ac_460d_ba7c_52d229e3b875.slice. 
Jan 23 23:34:53.633769 containerd[1972]: time="2026-01-23T23:34:53.633715282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b9c97bcf-2xtt8,Uid:5be57303-da73-45f8-8222-a093d6ce8129,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:34:53.675191 kubelet[3413]: I0123 23:34:53.675133 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9zm\" (UniqueName: \"kubernetes.io/projected/0699514f-51e2-4aa1-86de-4ee590fe63e1-kube-api-access-sb9zm\") pod \"goldmane-666569f655-vqjnv\" (UID: \"0699514f-51e2-4aa1-86de-4ee590fe63e1\") " pod="calico-system/goldmane-666569f655-vqjnv" Jan 23 23:34:53.677115 kubelet[3413]: I0123 23:34:53.675970 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b0b2743-62ac-460d-ba7c-52d229e3b875-calico-apiserver-certs\") pod \"calico-apiserver-68b9c97bcf-96pwk\" (UID: \"1b0b2743-62ac-460d-ba7c-52d229e3b875\") " pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" Jan 23 23:34:53.677115 kubelet[3413]: I0123 23:34:53.676042 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wlxm\" (UniqueName: \"kubernetes.io/projected/1b0b2743-62ac-460d-ba7c-52d229e3b875-kube-api-access-7wlxm\") pod \"calico-apiserver-68b9c97bcf-96pwk\" (UID: \"1b0b2743-62ac-460d-ba7c-52d229e3b875\") " pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" Jan 23 23:34:53.677115 kubelet[3413]: I0123 23:34:53.676160 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0699514f-51e2-4aa1-86de-4ee590fe63e1-config\") pod \"goldmane-666569f655-vqjnv\" (UID: \"0699514f-51e2-4aa1-86de-4ee590fe63e1\") " pod="calico-system/goldmane-666569f655-vqjnv" Jan 23 23:34:53.677115 kubelet[3413]: I0123 23:34:53.676264 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0699514f-51e2-4aa1-86de-4ee590fe63e1-goldmane-ca-bundle\") pod \"goldmane-666569f655-vqjnv\" (UID: \"0699514f-51e2-4aa1-86de-4ee590fe63e1\") " pod="calico-system/goldmane-666569f655-vqjnv" Jan 23 23:34:53.677115 kubelet[3413]: I0123 23:34:53.676311 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0699514f-51e2-4aa1-86de-4ee590fe63e1-goldmane-key-pair\") pod \"goldmane-666569f655-vqjnv\" (UID: \"0699514f-51e2-4aa1-86de-4ee590fe63e1\") " pod="calico-system/goldmane-666569f655-vqjnv" Jan 23 23:34:53.691127 containerd[1972]: time="2026-01-23T23:34:53.690875435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc86cc467-psrjv,Uid:635ab409-2768-46a2-9edc-f4c292ab1bff,Namespace:calico-system,Attempt:0,}" Jan 23 23:34:53.826890 containerd[1972]: time="2026-01-23T23:34:53.826528667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-778cf5d48d-s7h4h,Uid:64fe3344-4e80-444f-ba70-e34e02720a15,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:34:53.872800 containerd[1972]: time="2026-01-23T23:34:53.872699964Z" level=error msg="Failed to destroy network for sandbox \"1ac16f6100e5fce9dd58ab00dc426cc0d2e577cb7030f3e716bef8c7e6e2f232\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:53.881223 containerd[1972]: time="2026-01-23T23:34:53.880045680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b65b9-26wjv,Uid:6c551026-ffac-43ea-999f-0823acd8fbb1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac16f6100e5fce9dd58ab00dc426cc0d2e577cb7030f3e716bef8c7e6e2f232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:53.882390 containerd[1972]: time="2026-01-23T23:34:53.882235176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vqjnv,Uid:0699514f-51e2-4aa1-86de-4ee590fe63e1,Namespace:calico-system,Attempt:0,}" Jan 23 23:34:53.884251 kubelet[3413]: E0123 23:34:53.884138 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac16f6100e5fce9dd58ab00dc426cc0d2e577cb7030f3e716bef8c7e6e2f232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:53.885157 kubelet[3413]: E0123 23:34:53.884448 3413 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac16f6100e5fce9dd58ab00dc426cc0d2e577cb7030f3e716bef8c7e6e2f232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" Jan 23 23:34:53.885157 kubelet[3413]: E0123 23:34:53.884979 3413 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac16f6100e5fce9dd58ab00dc426cc0d2e577cb7030f3e716bef8c7e6e2f232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" Jan 23 23:34:53.885157 kubelet[3413]: E0123 23:34:53.885087 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57b65b9-26wjv_calico-system(6c551026-ffac-43ea-999f-0823acd8fbb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57b65b9-26wjv_calico-system(6c551026-ffac-43ea-999f-0823acd8fbb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ac16f6100e5fce9dd58ab00dc426cc0d2e577cb7030f3e716bef8c7e6e2f232\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" podUID="6c551026-ffac-43ea-999f-0823acd8fbb1" Jan 23 23:34:53.928720 containerd[1972]: time="2026-01-23T23:34:53.928659108Z" level=error msg="Failed to destroy network for sandbox \"ae3c6b22d493475cad5c191712b6b4a8f190cf235391bcfd88eb7d309895dbae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 23 23:34:53.936435 containerd[1972]: time="2026-01-23T23:34:53.936066876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b9c97bcf-2xtt8,Uid:5be57303-da73-45f8-8222-a093d6ce8129,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3c6b22d493475cad5c191712b6b4a8f190cf235391bcfd88eb7d309895dbae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:53.936956 kubelet[3413]: E0123 23:34:53.936803 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3c6b22d493475cad5c191712b6b4a8f190cf235391bcfd88eb7d309895dbae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:53.936956 kubelet[3413]: E0123 23:34:53.936886 3413 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3c6b22d493475cad5c191712b6b4a8f190cf235391bcfd88eb7d309895dbae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" Jan 23 23:34:53.938063 kubelet[3413]: E0123 23:34:53.937493 3413 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3c6b22d493475cad5c191712b6b4a8f190cf235391bcfd88eb7d309895dbae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" Jan 23 23:34:53.938063 kubelet[3413]: E0123 23:34:53.937618 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b9c97bcf-2xtt8_calico-apiserver(5be57303-da73-45f8-8222-a093d6ce8129)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b9c97bcf-2xtt8_calico-apiserver(5be57303-da73-45f8-8222-a093d6ce8129)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae3c6b22d493475cad5c191712b6b4a8f190cf235391bcfd88eb7d309895dbae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" podUID="5be57303-da73-45f8-8222-a093d6ce8129" Jan 23 23:34:53.938357 containerd[1972]: time="2026-01-23T23:34:53.937777488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b9c97bcf-96pwk,Uid:1b0b2743-62ac-460d-ba7c-52d229e3b875,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:34:53.939159 containerd[1972]: time="2026-01-23T23:34:53.939104412Z" level=error msg="Failed to destroy network for sandbox \"6b17b9167070b99d697270a78530d663fd2c03230b1db9affdfcce15d5fd3e19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 23 23:34:53.948974 containerd[1972]: time="2026-01-23T23:34:53.948761160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc86cc467-psrjv,Uid:635ab409-2768-46a2-9edc-f4c292ab1bff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b17b9167070b99d697270a78530d663fd2c03230b1db9affdfcce15d5fd3e19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:53.950622 kubelet[3413]: E0123 23:34:53.949346 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b17b9167070b99d697270a78530d663fd2c03230b1db9affdfcce15d5fd3e19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:53.950622 kubelet[3413]: E0123 23:34:53.949427 3413 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b17b9167070b99d697270a78530d663fd2c03230b1db9affdfcce15d5fd3e19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc86cc467-psrjv" Jan 23 23:34:53.950622 kubelet[3413]: E0123 23:34:53.949460 3413 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b17b9167070b99d697270a78530d663fd2c03230b1db9affdfcce15d5fd3e19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc86cc467-psrjv" Jan 23 23:34:53.950902 kubelet[3413]: E0123 23:34:53.949550 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dc86cc467-psrjv_calico-system(635ab409-2768-46a2-9edc-f4c292ab1bff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dc86cc467-psrjv_calico-system(635ab409-2768-46a2-9edc-f4c292ab1bff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b17b9167070b99d697270a78530d663fd2c03230b1db9affdfcce15d5fd3e19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc86cc467-psrjv" podUID="635ab409-2768-46a2-9edc-f4c292ab1bff" Jan 23 23:34:54.002476 containerd[1972]: time="2026-01-23T23:34:54.002412416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 23:34:54.104522 containerd[1972]: time="2026-01-23T23:34:54.104181873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rq9p,Uid:ff001061-ac71-4e8e-befd-96b93fbb4d6b,Namespace:kube-system,Attempt:0,}" Jan 23 23:34:54.156051 containerd[1972]: time="2026-01-23T23:34:54.155966457Z" level=error msg="Failed to destroy network for sandbox \"ae2fd3ca8ed250c026468d77009134684dae3e82030e554580364465eec49b57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 23 23:34:54.161230 systemd[1]: run-netns-cni\x2de00774f9\x2d0aee\x2d25e9\x2d81a6\x2d95600dabf3c2.mount: Deactivated successfully. Jan 23 23:34:54.164763 containerd[1972]: time="2026-01-23T23:34:54.163899693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vqjnv,Uid:0699514f-51e2-4aa1-86de-4ee590fe63e1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2fd3ca8ed250c026468d77009134684dae3e82030e554580364465eec49b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.165000 kubelet[3413]: E0123 23:34:54.164228 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2fd3ca8ed250c026468d77009134684dae3e82030e554580364465eec49b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.165000 kubelet[3413]: E0123 23:34:54.164309 3413 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2fd3ca8ed250c026468d77009134684dae3e82030e554580364465eec49b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vqjnv" Jan 23 23:34:54.165000 kubelet[3413]: E0123 23:34:54.164344 3413 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2fd3ca8ed250c026468d77009134684dae3e82030e554580364465eec49b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vqjnv" Jan 23 23:34:54.166486 kubelet[3413]: E0123 23:34:54.164413 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vqjnv_calico-system(0699514f-51e2-4aa1-86de-4ee590fe63e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vqjnv_calico-system(0699514f-51e2-4aa1-86de-4ee590fe63e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae2fd3ca8ed250c026468d77009134684dae3e82030e554580364465eec49b57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vqjnv" podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1" Jan 23 23:34:54.187934 containerd[1972]: time="2026-01-23T23:34:54.187846785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dstgq,Uid:8786d69c-e728-4edd-874c-d71c00ce627e,Namespace:kube-system,Attempt:0,}" Jan 23 23:34:54.215613 containerd[1972]: time="2026-01-23T23:34:54.215405613Z" level=error msg="Failed to destroy network for sandbox \"8b9083d1d678d61b7c2f3c0a91c2ff04905be66ce930d092c1e3381ca06998d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.221514 systemd[1]: run-netns-cni\x2db1692f33\x2dd88c\x2d130e\x2d6984\x2d876ffdadf1fc.mount: Deactivated successfully. Jan 23 23:34:54.226091 containerd[1972]: time="2026-01-23T23:34:54.225877893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-778cf5d48d-s7h4h,Uid:64fe3344-4e80-444f-ba70-e34e02720a15,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9083d1d678d61b7c2f3c0a91c2ff04905be66ce930d092c1e3381ca06998d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.226710 kubelet[3413]: E0123 23:34:54.226413 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9083d1d678d61b7c2f3c0a91c2ff04905be66ce930d092c1e3381ca06998d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.226986 kubelet[3413]: E0123 23:34:54.226642 3413 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9083d1d678d61b7c2f3c0a91c2ff04905be66ce930d092c1e3381ca06998d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" Jan 23 23:34:54.227083 kubelet[3413]: E0123 23:34:54.226994 3413 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9083d1d678d61b7c2f3c0a91c2ff04905be66ce930d092c1e3381ca06998d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" Jan 23 23:34:54.227147 kubelet[3413]: E0123 23:34:54.227074 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-778cf5d48d-s7h4h_calico-apiserver(64fe3344-4e80-444f-ba70-e34e02720a15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-778cf5d48d-s7h4h_calico-apiserver(64fe3344-4e80-444f-ba70-e34e02720a15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b9083d1d678d61b7c2f3c0a91c2ff04905be66ce930d092c1e3381ca06998d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" podUID="64fe3344-4e80-444f-ba70-e34e02720a15" Jan 23 23:34:54.257262 containerd[1972]: time="2026-01-23T23:34:54.256212213Z" level=error msg="Failed to destroy network for sandbox \"b85f4fafb2693302c8e8cbec85ba736a722098a6db1329b05dd2bc85bfc99bfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.272802 containerd[1972]: 
time="2026-01-23T23:34:54.272731042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b9c97bcf-96pwk,Uid:1b0b2743-62ac-460d-ba7c-52d229e3b875,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85f4fafb2693302c8e8cbec85ba736a722098a6db1329b05dd2bc85bfc99bfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.273739 kubelet[3413]: E0123 23:34:54.273487 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85f4fafb2693302c8e8cbec85ba736a722098a6db1329b05dd2bc85bfc99bfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.273739 kubelet[3413]: E0123 23:34:54.273590 3413 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85f4fafb2693302c8e8cbec85ba736a722098a6db1329b05dd2bc85bfc99bfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" Jan 23 23:34:54.273739 kubelet[3413]: E0123 23:34:54.273650 3413 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85f4fafb2693302c8e8cbec85ba736a722098a6db1329b05dd2bc85bfc99bfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" Jan 23 23:34:54.274429 kubelet[3413]: E0123 23:34:54.273758 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b9c97bcf-96pwk_calico-apiserver(1b0b2743-62ac-460d-ba7c-52d229e3b875)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b9c97bcf-96pwk_calico-apiserver(1b0b2743-62ac-460d-ba7c-52d229e3b875)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b85f4fafb2693302c8e8cbec85ba736a722098a6db1329b05dd2bc85bfc99bfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:34:54.311756 containerd[1972]: time="2026-01-23T23:34:54.311635702Z" level=error msg="Failed to destroy network for sandbox \"3416688b519c687b8e750734b627b3014fa18d6a7ace56de633de1d97c9a0947\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.316191 containerd[1972]: time="2026-01-23T23:34:54.315296182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rq9p,Uid:ff001061-ac71-4e8e-befd-96b93fbb4d6b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"3416688b519c687b8e750734b627b3014fa18d6a7ace56de633de1d97c9a0947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.316394 kubelet[3413]: E0123 23:34:54.315631 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3416688b519c687b8e750734b627b3014fa18d6a7ace56de633de1d97c9a0947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.316394 kubelet[3413]: E0123 23:34:54.315703 3413 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3416688b519c687b8e750734b627b3014fa18d6a7ace56de633de1d97c9a0947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5rq9p" Jan 23 23:34:54.316394 kubelet[3413]: E0123 23:34:54.315748 3413 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3416688b519c687b8e750734b627b3014fa18d6a7ace56de633de1d97c9a0947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5rq9p" Jan 23 23:34:54.316571 kubelet[3413]: E0123 23:34:54.315832 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5rq9p_kube-system(ff001061-ac71-4e8e-befd-96b93fbb4d6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5rq9p_kube-system(ff001061-ac71-4e8e-befd-96b93fbb4d6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3416688b519c687b8e750734b627b3014fa18d6a7ace56de633de1d97c9a0947\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5rq9p" podUID="ff001061-ac71-4e8e-befd-96b93fbb4d6b" Jan 23 23:34:54.345734 containerd[1972]: time="2026-01-23T23:34:54.345664174Z" level=error msg="Failed to destroy network for sandbox \"48eaa0a0c4cfe61abeaeb97a19c54dbe4a86b98061ace156bddf0a92c3a7d9de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.348333 containerd[1972]: time="2026-01-23T23:34:54.348267022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dstgq,Uid:8786d69c-e728-4edd-874c-d71c00ce627e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48eaa0a0c4cfe61abeaeb97a19c54dbe4a86b98061ace156bddf0a92c3a7d9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.348817 kubelet[3413]: E0123 23:34:54.348597 3413 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48eaa0a0c4cfe61abeaeb97a19c54dbe4a86b98061ace156bddf0a92c3a7d9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.348817 kubelet[3413]: E0123 23:34:54.348670 3413 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48eaa0a0c4cfe61abeaeb97a19c54dbe4a86b98061ace156bddf0a92c3a7d9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dstgq" Jan 23 23:34:54.348817 kubelet[3413]: E0123 23:34:54.348709 3413 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48eaa0a0c4cfe61abeaeb97a19c54dbe4a86b98061ace156bddf0a92c3a7d9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dstgq" Jan 23 23:34:54.349178 kubelet[3413]: E0123 23:34:54.348799 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dstgq_kube-system(8786d69c-e728-4edd-874c-d71c00ce627e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dstgq_kube-system(8786d69c-e728-4edd-874c-d71c00ce627e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48eaa0a0c4cfe61abeaeb97a19c54dbe4a86b98061ace156bddf0a92c3a7d9de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dstgq" podUID="8786d69c-e728-4edd-874c-d71c00ce627e" Jan 23 23:34:54.620164 systemd[1]: Created slice kubepods-besteffort-podbc435867_361d_4b3f_a3e1_96c440fc0a66.slice - libcontainer container kubepods-besteffort-podbc435867_361d_4b3f_a3e1_96c440fc0a66.slice. 
Jan 23 23:34:54.625039 containerd[1972]: time="2026-01-23T23:34:54.624771839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvgjz,Uid:bc435867-361d-4b3f-a3e1-96c440fc0a66,Namespace:calico-system,Attempt:0,}" Jan 23 23:34:54.716426 containerd[1972]: time="2026-01-23T23:34:54.716367144Z" level=error msg="Failed to destroy network for sandbox \"ecdab3d2f4a226cb701253264163bb09c08af4fa2cbd037debfad0fc57d14f71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.720103 containerd[1972]: time="2026-01-23T23:34:54.719894832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvgjz,Uid:bc435867-361d-4b3f-a3e1-96c440fc0a66,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecdab3d2f4a226cb701253264163bb09c08af4fa2cbd037debfad0fc57d14f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.720647 kubelet[3413]: E0123 23:34:54.720590 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecdab3d2f4a226cb701253264163bb09c08af4fa2cbd037debfad0fc57d14f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:34:54.721700 kubelet[3413]: E0123 23:34:54.720677 3413 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecdab3d2f4a226cb701253264163bb09c08af4fa2cbd037debfad0fc57d14f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cvgjz" Jan 23 23:34:54.721700 kubelet[3413]: E0123 23:34:54.720721 3413 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecdab3d2f4a226cb701253264163bb09c08af4fa2cbd037debfad0fc57d14f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cvgjz" Jan 23 23:34:54.721700 kubelet[3413]: E0123 23:34:54.720810 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cvgjz_calico-system(bc435867-361d-4b3f-a3e1-96c440fc0a66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cvgjz_calico-system(bc435867-361d-4b3f-a3e1-96c440fc0a66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecdab3d2f4a226cb701253264163bb09c08af4fa2cbd037debfad0fc57d14f71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:34:55.043301 systemd[1]: run-netns-cni\x2d241d1133\x2d368f\x2df24e\x2d9f6c\x2da7f88a32b982.mount: Deactivated successfully. 
Jan 23 23:34:55.043774 systemd[1]: run-netns-cni\x2d34c64e00\x2de0c9\x2d3f2b\x2dc054\x2df117a14defbc.mount: Deactivated successfully. Jan 23 23:34:55.044057 systemd[1]: run-netns-cni\x2d9e8101ad\x2dd8a9\x2d6f23\x2db77c\x2d3ab6debd8159.mount: Deactivated successfully. Jan 23 23:35:00.070141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087878974.mount: Deactivated successfully. Jan 23 23:35:00.185800 containerd[1972]: time="2026-01-23T23:35:00.185720307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:35:00.187886 containerd[1972]: time="2026-01-23T23:35:00.187814895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 23 23:35:00.261431 containerd[1972]: time="2026-01-23T23:35:00.261344499Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:35:00.267295 containerd[1972]: time="2026-01-23T23:35:00.267211983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:35:00.268956 containerd[1972]: time="2026-01-23T23:35:00.268333659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.265853155s" Jan 23 23:35:00.268956 containerd[1972]: time="2026-01-23T23:35:00.268395231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 23:35:00.302457 containerd[1972]: time="2026-01-23T23:35:00.302397243Z" level=info msg="CreateContainer within sandbox \"89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 23:35:00.352468 containerd[1972]: time="2026-01-23T23:35:00.352345696Z" level=info msg="Container 1d6848c0c2717d2daf9fabca6b7413b605a088febbf244967f6c688be1dee315: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:35:00.402687 containerd[1972]: time="2026-01-23T23:35:00.402606364Z" level=info msg="CreateContainer within sandbox \"89062fcfd2be05ea07c37b0b2d9f610db1d00cc28253d9135d19f4a835c3b2af\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1d6848c0c2717d2daf9fabca6b7413b605a088febbf244967f6c688be1dee315\"" Jan 23 23:35:00.405224 containerd[1972]: time="2026-01-23T23:35:00.403731592Z" level=info msg="StartContainer for \"1d6848c0c2717d2daf9fabca6b7413b605a088febbf244967f6c688be1dee315\"" Jan 23 23:35:00.410651 containerd[1972]: time="2026-01-23T23:35:00.410597272Z" level=info msg="connecting to shim 1d6848c0c2717d2daf9fabca6b7413b605a088febbf244967f6c688be1dee315" address="unix:///run/containerd/s/7f6ff468b27ad1f0d9dd5bc74be9191d9289b9e8edf738d1a9bbeef05db12cc0" protocol=ttrpc version=3 Jan 23 23:35:00.446546 systemd[1]: Started cri-containerd-1d6848c0c2717d2daf9fabca6b7413b605a088febbf244967f6c688be1dee315.scope - libcontainer container 1d6848c0c2717d2daf9fabca6b7413b605a088febbf244967f6c688be1dee315. 
Jan 23 23:35:00.543000 audit: BPF prog-id=175 op=LOAD Jan 23 23:35:00.543000 audit[4462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3931 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:00.553025 kernel: audit: type=1334 audit(1769211300.543:576): prog-id=175 op=LOAD Jan 23 23:35:00.553210 kernel: audit: type=1300 audit(1769211300.543:576): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3931 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:00.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164363834386330633237313764326461663966616263613662373431 Jan 23 23:35:00.559321 kernel: audit: type=1327 audit(1769211300.543:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164363834386330633237313764326461663966616263613662373431 Jan 23 23:35:00.543000 audit: BPF prog-id=176 op=LOAD Jan 23 23:35:00.561681 kernel: audit: type=1334 audit(1769211300.543:577): prog-id=176 op=LOAD Jan 23 23:35:00.561858 kernel: audit: type=1300 audit(1769211300.543:577): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3931 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:00.543000 audit[4462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3931 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:00.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164363834386330633237313764326461663966616263613662373431 Jan 23 23:35:00.574129 kernel: audit: type=1327 audit(1769211300.543:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164363834386330633237313764326461663966616263613662373431 Jan 23 23:35:00.543000 audit: BPF prog-id=176 op=UNLOAD Jan 23 23:35:00.575948 kernel: audit: type=1334 audit(1769211300.543:578): prog-id=176 op=UNLOAD Jan 23 23:35:00.576176 kernel: audit: type=1300 audit(1769211300.543:578): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:00.543000 audit[4462]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3931 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:00.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164363834386330633237313764326461663966616263613662373431 Jan 23 23:35:00.588952 kernel: audit: type=1327 audit(1769211300.543:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164363834386330633237313764326461663966616263613662373431 Jan 23 23:35:00.543000 audit: BPF prog-id=175 op=UNLOAD Jan 23 23:35:00.591081 kernel: audit: type=1334 audit(1769211300.543:579): prog-id=175 op=UNLOAD Jan 23 23:35:00.543000 audit[4462]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:00.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164363834386330633237313764326461663966616263613662373431 Jan 23 23:35:00.543000 audit: BPF prog-id=177 op=LOAD Jan 23 23:35:00.543000 audit[4462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3931 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:00.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164363834386330633237313764326461663966616263613662373431 Jan 23 23:35:00.634553 containerd[1972]: time="2026-01-23T23:35:00.634404329Z" level=info msg="StartContainer for \"1d6848c0c2717d2daf9fabca6b7413b605a088febbf244967f6c688be1dee315\" returns successfully" Jan 23 23:35:00.894163 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 23:35:00.894301 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 23 23:35:01.124199 kubelet[3413]: I0123 23:35:01.124011 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8q2pl" podStartSLOduration=1.870814925 podStartE2EDuration="17.123984184s" podCreationTimestamp="2026-01-23 23:34:44 +0000 UTC" firstStartedPulling="2026-01-23 23:34:45.0170526 +0000 UTC m=+34.677014382" lastFinishedPulling="2026-01-23 23:35:00.270221871 +0000 UTC m=+49.930183641" observedRunningTime="2026-01-23 23:35:01.123580072 +0000 UTC m=+50.783541878" watchObservedRunningTime="2026-01-23 23:35:01.123984184 +0000 UTC m=+50.783945954" Jan 23 23:35:01.251539 kubelet[3413]: I0123 23:35:01.251205 3413 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/635ab409-2768-46a2-9edc-f4c292ab1bff-whisker-ca-bundle\") pod \"635ab409-2768-46a2-9edc-f4c292ab1bff\" (UID: \"635ab409-2768-46a2-9edc-f4c292ab1bff\") " Jan 23 23:35:01.254106 kubelet[3413]: I0123 23:35:01.252103 3413 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/635ab409-2768-46a2-9edc-f4c292ab1bff-whisker-backend-key-pair\") pod \"635ab409-2768-46a2-9edc-f4c292ab1bff\" (UID: \"635ab409-2768-46a2-9edc-f4c292ab1bff\") " Jan 23 23:35:01.254106 kubelet[3413]: I0123 23:35:01.253545 3413 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65ddj\" (UniqueName: \"kubernetes.io/projected/635ab409-2768-46a2-9edc-f4c292ab1bff-kube-api-access-65ddj\") pod \"635ab409-2768-46a2-9edc-f4c292ab1bff\" (UID: \"635ab409-2768-46a2-9edc-f4c292ab1bff\") " Jan 23 23:35:01.262975 kubelet[3413]: I0123 23:35:01.262101 3413 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/635ab409-2768-46a2-9edc-f4c292ab1bff-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "635ab409-2768-46a2-9edc-f4c292ab1bff" (UID: "635ab409-2768-46a2-9edc-f4c292ab1bff"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:35:01.272599 systemd[1]: var-lib-kubelet-pods-635ab409\x2d2768\x2d46a2\x2d9edc\x2df4c292ab1bff-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 23:35:01.275182 kubelet[3413]: I0123 23:35:01.274968 3413 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/635ab409-2768-46a2-9edc-f4c292ab1bff-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "635ab409-2768-46a2-9edc-f4c292ab1bff" (UID: "635ab409-2768-46a2-9edc-f4c292ab1bff"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:35:01.293944 systemd[1]: var-lib-kubelet-pods-635ab409\x2d2768\x2d46a2\x2d9edc\x2df4c292ab1bff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d65ddj.mount: Deactivated successfully. Jan 23 23:35:01.316439 kubelet[3413]: I0123 23:35:01.316355 3413 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/635ab409-2768-46a2-9edc-f4c292ab1bff-kube-api-access-65ddj" (OuterVolumeSpecName: "kube-api-access-65ddj") pod "635ab409-2768-46a2-9edc-f4c292ab1bff" (UID: "635ab409-2768-46a2-9edc-f4c292ab1bff"). InnerVolumeSpecName "kube-api-access-65ddj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:35:01.355338 kubelet[3413]: I0123 23:35:01.355228 3413 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/635ab409-2768-46a2-9edc-f4c292ab1bff-whisker-ca-bundle\") on node \"ip-172-31-23-100\" DevicePath \"\"" Jan 23 23:35:01.355338 kubelet[3413]: I0123 23:35:01.355274 3413 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/635ab409-2768-46a2-9edc-f4c292ab1bff-whisker-backend-key-pair\") on node \"ip-172-31-23-100\" DevicePath \"\"" Jan 23 23:35:01.355338 kubelet[3413]: I0123 23:35:01.355299 3413 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-65ddj\" (UniqueName: \"kubernetes.io/projected/635ab409-2768-46a2-9edc-f4c292ab1bff-kube-api-access-65ddj\") on node \"ip-172-31-23-100\" DevicePath \"\"" Jan 23 23:35:02.057532 systemd[1]: Removed slice kubepods-besteffort-pod635ab409_2768_46a2_9edc_f4c292ab1bff.slice - libcontainer container kubepods-besteffort-pod635ab409_2768_46a2_9edc_f4c292ab1bff.slice. Jan 23 23:35:02.194560 systemd[1]: Created slice kubepods-besteffort-pod24e24657_8e54_4ae7_acdb_2eda45aabbdf.slice - libcontainer container kubepods-besteffort-pod24e24657_8e54_4ae7_acdb_2eda45aabbdf.slice. Jan 23 23:35:02.262326 kubelet[3413]: I0123 23:35:02.262261 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/24e24657-8e54-4ae7-acdb-2eda45aabbdf-whisker-backend-key-pair\") pod \"whisker-f9b44c667-h6s7s\" (UID: \"24e24657-8e54-4ae7-acdb-2eda45aabbdf\") " pod="calico-system/whisker-f9b44c667-h6s7s" Jan 23 23:35:02.263014 kubelet[3413]: I0123 23:35:02.262332 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krc7r\" (UniqueName: \"kubernetes.io/projected/24e24657-8e54-4ae7-acdb-2eda45aabbdf-kube-api-access-krc7r\") pod \"whisker-f9b44c667-h6s7s\" (UID: \"24e24657-8e54-4ae7-acdb-2eda45aabbdf\") " pod="calico-system/whisker-f9b44c667-h6s7s" Jan 23 23:35:02.263014 kubelet[3413]: I0123 23:35:02.262399 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e24657-8e54-4ae7-acdb-2eda45aabbdf-whisker-ca-bundle\") pod \"whisker-f9b44c667-h6s7s\" (UID: \"24e24657-8e54-4ae7-acdb-2eda45aabbdf\") " pod="calico-system/whisker-f9b44c667-h6s7s" Jan 23 23:35:02.504177 containerd[1972]: time="2026-01-23T23:35:02.504094542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f9b44c667-h6s7s,Uid:24e24657-8e54-4ae7-acdb-2eda45aabbdf,Namespace:calico-system,Attempt:0,}" Jan 23 23:35:02.612957 kubelet[3413]: I0123 23:35:02.612856 3413 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="635ab409-2768-46a2-9edc-f4c292ab1bff" path="/var/lib/kubelet/pods/635ab409-2768-46a2-9edc-f4c292ab1bff/volumes" Jan 23 23:35:02.878176 systemd-networkd[1781]: caliaa217f3c85e: Link UP Jan 23 23:35:02.879442 systemd-networkd[1781]: caliaa217f3c85e: Gained carrier Jan 23 23:35:02.882163 (udev-worker)[4500]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:35:02.919941 containerd[1972]: 2026-01-23 23:35:02.556 [INFO][4581] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 23:35:02.919941 containerd[1972]: 2026-01-23 23:35:02.667 [INFO][4581] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0 whisker-f9b44c667- calico-system 24e24657-8e54-4ae7-acdb-2eda45aabbdf 933 0 2026-01-23 23:35:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f9b44c667 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-23-100 whisker-f9b44c667-h6s7s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliaa217f3c85e [] [] }} ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Namespace="calico-system" Pod="whisker-f9b44c667-h6s7s" WorkloadEndpoint="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-" Jan 23 23:35:02.919941 containerd[1972]: 2026-01-23 23:35:02.667 [INFO][4581] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Namespace="calico-system" Pod="whisker-f9b44c667-h6s7s" WorkloadEndpoint="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" Jan 23 23:35:02.919941 containerd[1972]: 2026-01-23 23:35:02.772 [INFO][4592] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" HandleID="k8s-pod-network.c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Workload="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" Jan 23 23:35:02.920296 containerd[1972]: 2026-01-23 23:35:02.772 [INFO][4592] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" HandleID="k8s-pod-network.c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Workload="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103940), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-100", "pod":"whisker-f9b44c667-h6s7s", "timestamp":"2026-01-23 23:35:02.772361804 +0000 UTC"}, Hostname:"ip-172-31-23-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:35:02.920296 containerd[1972]: 2026-01-23 23:35:02.773 [INFO][4592] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:35:02.920296 containerd[1972]: 2026-01-23 23:35:02.773 [INFO][4592] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:35:02.920296 containerd[1972]: 2026-01-23 23:35:02.773 [INFO][4592] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-100' Jan 23 23:35:02.920296 containerd[1972]: 2026-01-23 23:35:02.794 [INFO][4592] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" host="ip-172-31-23-100" Jan 23 23:35:02.920296 containerd[1972]: 2026-01-23 23:35:02.809 [INFO][4592] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-100" Jan 23 23:35:02.920296 containerd[1972]: 2026-01-23 23:35:02.820 [INFO][4592] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:02.920296 containerd[1972]: 2026-01-23 23:35:02.824 [INFO][4592] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:02.920296 containerd[1972]: 2026-01-23 23:35:02.829 [INFO][4592] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:02.923082 containerd[1972]: 2026-01-23 23:35:02.829 [INFO][4592] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" host="ip-172-31-23-100" Jan 23 23:35:02.923082 containerd[1972]: 2026-01-23 23:35:02.832 [INFO][4592] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676 Jan 23 23:35:02.923082 containerd[1972]: 2026-01-23 23:35:02.842 [INFO][4592] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" host="ip-172-31-23-100" Jan 23 23:35:02.923082 containerd[1972]: 2026-01-23 23:35:02.854 [INFO][4592] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.1/26] block=192.168.22.0/26 handle="k8s-pod-network.c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" host="ip-172-31-23-100" Jan 23 23:35:02.923082 containerd[1972]: 2026-01-23 23:35:02.854 [INFO][4592] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.1/26] handle="k8s-pod-network.c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" host="ip-172-31-23-100" Jan 23 23:35:02.923082 containerd[1972]: 2026-01-23 23:35:02.854 [INFO][4592] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:35:02.923082 containerd[1972]: 2026-01-23 23:35:02.854 [INFO][4592] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.1/26] IPv6=[] ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" HandleID="k8s-pod-network.c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Workload="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" Jan 23 23:35:02.923415 containerd[1972]: 2026-01-23 23:35:02.861 [INFO][4581] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Namespace="calico-system" Pod="whisker-f9b44c667-h6s7s" WorkloadEndpoint="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0", GenerateName:"whisker-f9b44c667-", Namespace:"calico-system", SelfLink:"", UID:"24e24657-8e54-4ae7-acdb-2eda45aabbdf", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 35, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f9b44c667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"", Pod:"whisker-f9b44c667-h6s7s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa217f3c85e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:02.923415 containerd[1972]: 2026-01-23 23:35:02.861 [INFO][4581] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.1/32] ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Namespace="calico-system" Pod="whisker-f9b44c667-h6s7s" WorkloadEndpoint="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" Jan 23 23:35:02.923610 containerd[1972]: 2026-01-23 23:35:02.861 [INFO][4581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa217f3c85e ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Namespace="calico-system" Pod="whisker-f9b44c667-h6s7s" WorkloadEndpoint="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" Jan 23 23:35:02.923610 containerd[1972]: 2026-01-23 23:35:02.885 [INFO][4581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Namespace="calico-system" Pod="whisker-f9b44c667-h6s7s" WorkloadEndpoint="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" Jan 23 23:35:02.923709 containerd[1972]: 2026-01-23 23:35:02.885 [INFO][4581] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Namespace="calico-system" Pod="whisker-f9b44c667-h6s7s" 
WorkloadEndpoint="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0", GenerateName:"whisker-f9b44c667-", Namespace:"calico-system", SelfLink:"", UID:"24e24657-8e54-4ae7-acdb-2eda45aabbdf", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 35, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f9b44c667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676", Pod:"whisker-f9b44c667-h6s7s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa217f3c85e", MAC:"c6:ac:9b:34:88:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:02.923830 containerd[1972]: 2026-01-23 23:35:02.912 [INFO][4581] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" Namespace="calico-system" Pod="whisker-f9b44c667-h6s7s" WorkloadEndpoint="ip--172--31--23--100-k8s-whisker--f9b44c667--h6s7s-eth0" Jan 23 23:35:03.039537 containerd[1972]: time="2026-01-23T23:35:03.038659445Z" level=info msg="connecting to shim c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676" address="unix:///run/containerd/s/dc8330c3bdb353b9ed6bd221e6070c5f0491c67b5ae1a1667f2cf609c9265fac" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:35:03.108413 systemd[1]: Started cri-containerd-c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676.scope - libcontainer container c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676. 
Jan 23 23:35:03.164000 audit: BPF prog-id=178 op=LOAD Jan 23 23:35:03.167000 audit: BPF prog-id=179 op=LOAD Jan 23 23:35:03.167000 audit[4691]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=4676 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653261616639336131373864336664643637363264623436636439 Jan 23 23:35:03.167000 audit: BPF prog-id=179 op=UNLOAD Jan 23 23:35:03.167000 audit[4691]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653261616639336131373864336664643637363264623436636439 Jan 23 23:35:03.168000 audit: BPF prog-id=180 op=LOAD Jan 23 23:35:03.168000 audit[4691]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=4676 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653261616639336131373864336664643637363264623436636439 Jan 23 23:35:03.168000 audit: BPF prog-id=181 op=LOAD Jan 23 23:35:03.168000 audit[4691]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=4676 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653261616639336131373864336664643637363264623436636439 Jan 23 23:35:03.168000 audit: BPF prog-id=181 op=UNLOAD Jan 23 23:35:03.168000 audit[4691]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653261616639336131373864336664643637363264623436636439 Jan 23 23:35:03.168000 audit: BPF prog-id=180 op=UNLOAD Jan 23 23:35:03.168000 audit[4691]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653261616639336131373864336664643637363264623436636439 Jan 23 23:35:03.168000 audit: BPF prog-id=182 op=LOAD Jan 23 23:35:03.168000 audit[4691]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=4676 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653261616639336131373864336664643637363264623436636439 Jan 23 23:35:03.267121 containerd[1972]: time="2026-01-23T23:35:03.267016890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f9b44c667-h6s7s,Uid:24e24657-8e54-4ae7-acdb-2eda45aabbdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6e2aaf93a178d3fdd6762db46cd91104063139e9ac9201ec20ee0f3c53c4676\"" Jan 23 23:35:03.274537 containerd[1972]: time="2026-01-23T23:35:03.273340998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:35:03.561211 containerd[1972]: time="2026-01-23T23:35:03.560739512Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:03.564780 containerd[1972]: time="2026-01-23T23:35:03.564288776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:35:03.564780 containerd[1972]: time="2026-01-23T23:35:03.564425312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:03.565898 kubelet[3413]: E0123 23:35:03.564852 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:35:03.565898 kubelet[3413]: E0123 23:35:03.564988 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:35:03.568674 kubelet[3413]: E0123 23:35:03.568566 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b96f1a59a5d8446cad0dc01e07796dc4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krc7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f9b44c667-h6s7s_calico-system(24e24657-8e54-4ae7-acdb-2eda45aabbdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:03.573940 containerd[1972]: time="2026-01-23T23:35:03.571886576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:35:03.615000 audit: BPF prog-id=183 op=LOAD Jan 23 23:35:03.615000 audit[4751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe37b7fa8 a2=98 a3=ffffe37b7f98 items=0 ppid=4633 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.615000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 23:35:03.615000 audit: BPF prog-id=183 op=UNLOAD Jan 23 23:35:03.615000 audit[4751]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe37b7f78 a3=0 items=0 ppid=4633 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.615000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 23:35:03.616000 audit: BPF prog-id=184 op=LOAD Jan 23 23:35:03.616000 audit[4751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe37b7e58 a2=74 a3=95 items=0 ppid=4633 pid=4751 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.616000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 23:35:03.617000 audit: BPF prog-id=184 op=UNLOAD Jan 23 23:35:03.617000 audit[4751]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4633 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.617000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 23:35:03.617000 audit: BPF prog-id=185 op=LOAD Jan 23 23:35:03.617000 audit[4751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe37b7e88 a2=40 a3=ffffe37b7eb8 items=0 ppid=4633 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.617000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 23:35:03.617000 audit: BPF prog-id=185 op=UNLOAD Jan 23 23:35:03.617000 audit[4751]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe37b7eb8 items=0 ppid=4633 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.617000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 23:35:03.627000 audit: BPF prog-id=186 op=LOAD Jan 23 23:35:03.627000 audit[4752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc12cfa58 a2=98 a3=ffffc12cfa48 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.627000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:03.629000 audit: BPF prog-id=186 op=UNLOAD Jan 23 23:35:03.629000 audit[4752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc12cfa28 a3=0 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.629000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:03.629000 audit: BPF prog-id=187 
op=LOAD Jan 23 23:35:03.629000 audit[4752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc12cf6e8 a2=74 a3=95 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.629000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:03.630000 audit: BPF prog-id=187 op=UNLOAD Jan 23 23:35:03.630000 audit[4752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.630000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:03.630000 audit: BPF prog-id=188 op=LOAD Jan 23 23:35:03.630000 audit[4752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc12cf748 a2=94 a3=2 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.630000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:03.630000 audit: BPF prog-id=188 op=UNLOAD Jan 23 23:35:03.630000 audit[4752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:03.630000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:03.833944 containerd[1972]: time="2026-01-23T23:35:03.833711721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:03.836206 containerd[1972]: time="2026-01-23T23:35:03.836101077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:35:03.836392 containerd[1972]: time="2026-01-23T23:35:03.836288229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:03.837478 kubelet[3413]: E0123 23:35:03.837360 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:35:03.838955 kubelet[3413]: E0123 23:35:03.837731 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:35:03.839115 kubelet[3413]: E0123 23:35:03.837953 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krc7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f9b44c667-h6s7s_calico-system(24e24657-8e54-4ae7-acdb-2eda45aabbdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:03.840579 kubelet[3413]: E0123 23:35:03.839696 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f9b44c667-h6s7s" podUID="24e24657-8e54-4ae7-acdb-2eda45aabbdf" Jan 23 23:35:04.052104 kubelet[3413]: E0123 23:35:04.052021 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f9b44c667-h6s7s" podUID="24e24657-8e54-4ae7-acdb-2eda45aabbdf" Jan 23 23:35:04.095000 audit[4778]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:04.095000 audit[4778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdcd34110 a2=0 a3=1 items=0 ppid=3525 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:04.100000 audit[4778]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:04.100000 audit[4778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdcd34110 a2=0 a3=1 items=0 ppid=3525 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:04.165000 audit: BPF prog-id=189 op=LOAD Jan 23 23:35:04.165000 audit[4752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc12cf708 a2=40 a3=ffffc12cf738 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.165000 audit: BPF prog-id=189 op=UNLOAD Jan 23 23:35:04.165000 audit[4752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc12cf738 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.185000 audit: BPF prog-id=190 op=LOAD Jan 23 23:35:04.185000 audit[4752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc12cf718 a2=94 a3=4 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.185000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.186000 audit: BPF prog-id=190 op=UNLOAD Jan 23 23:35:04.186000 audit[4752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.186000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.187000 audit: BPF prog-id=191 op=LOAD Jan 23 23:35:04.187000 audit[4752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc12cf558 a2=94 a3=5 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.187000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.187000 audit: BPF prog-id=191 op=UNLOAD Jan 23 23:35:04.187000 audit[4752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.187000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.188000 audit: BPF prog-id=192 op=LOAD Jan 23 23:35:04.188000 audit[4752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc12cf788 a2=94 a3=6 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.188000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.188000 audit: BPF prog-id=192 op=UNLOAD Jan 23 23:35:04.188000 audit[4752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.188000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.189000 audit: BPF prog-id=193 op=LOAD Jan 23 23:35:04.189000 audit[4752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc12cef58 a2=94 a3=83 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.190000 audit: BPF prog-id=194 op=LOAD Jan 23 23:35:04.190000 audit[4752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc12ced18 a2=94 a3=2 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.190000 audit: BPF prog-id=194 op=UNLOAD Jan 23 23:35:04.190000 audit[4752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.191000 audit: BPF prog-id=193 op=UNLOAD Jan 23 23:35:04.191000 audit[4752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=5 a1=57156c a2=1555d620 a3=15550b00 items=0 ppid=4633 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.191000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 23:35:04.211000 audit: BPF prog-id=195 op=LOAD Jan 23 23:35:04.211000 audit[4781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe663d558 a2=98 a3=ffffe663d548 items=0 ppid=4633 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.211000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 23:35:04.211000 audit: BPF prog-id=195 op=UNLOAD Jan 23 23:35:04.211000 audit[4781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe663d528 a3=0 items=0 ppid=4633 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.211000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 23:35:04.211000 audit: BPF prog-id=196 op=LOAD Jan 23 23:35:04.211000 audit[4781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe663d408 a2=74 a3=95 items=0 ppid=4633 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.211000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 23:35:04.211000 audit: BPF prog-id=196 op=UNLOAD Jan 23 23:35:04.211000 audit[4781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4633 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.211000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 23:35:04.211000 audit: BPF prog-id=197 op=LOAD Jan 23 23:35:04.211000 audit[4781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe663d438 a2=40 a3=ffffe663d468 items=0 ppid=4633 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.211000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 23:35:04.211000 audit: BPF prog-id=197 op=UNLOAD Jan 23 23:35:04.211000 audit[4781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe663d468 items=0 ppid=4633 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.211000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 23:35:04.334109 systemd-networkd[1781]: caliaa217f3c85e: Gained IPv6LL Jan 23 23:35:04.343157 systemd-networkd[1781]: vxlan.calico: Link UP Jan 23 23:35:04.343289 systemd-networkd[1781]: vxlan.calico: Gained carrier Jan 23 23:35:04.390000 audit: BPF prog-id=198 op=LOAD Jan 23 23:35:04.390000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcc92c7b8 a2=98 a3=ffffcc92c7a8 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.391000 audit: BPF prog-id=198 op=UNLOAD Jan 23 23:35:04.391000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcc92c788 a3=0 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.391000 audit: BPF prog-id=199 op=LOAD Jan 23 23:35:04.391000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcc92c498 a2=74 a3=95 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.391000 audit: BPF prog-id=199 op=UNLOAD Jan 23 23:35:04.391000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.391000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.391000 audit: BPF prog-id=200 op=LOAD Jan 23 23:35:04.391000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcc92c4f8 a2=94 a3=2 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.391000 audit: BPF prog-id=200 op=UNLOAD Jan 23 23:35:04.391000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.391000 audit: BPF prog-id=201 op=LOAD Jan 23 23:35:04.391000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcc92c378 a2=40 a3=ffffcc92c3a8 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.391000 audit: BPF prog-id=201 op=UNLOAD Jan 23 23:35:04.391000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffcc92c3a8 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.391000 audit: BPF prog-id=202 op=LOAD Jan 23 23:35:04.391000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcc92c4c8 a2=94 a3=b7 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.394000 audit: BPF prog-id=202 op=UNLOAD Jan 23 23:35:04.394000 audit[4808]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.394000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.398000 audit: BPF prog-id=203 op=LOAD Jan 23 23:35:04.398000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcc92bb78 a2=94 a3=2 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.398000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.398000 audit: BPF prog-id=203 op=UNLOAD Jan 23 23:35:04.398000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.398000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.398000 audit: BPF prog-id=204 op=LOAD Jan 23 23:35:04.398000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcc92bd08 a2=94 a3=30 items=0 ppid=4633 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.398000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 23:35:04.400679 (udev-worker)[4504]: Network interface NamePolicy= disabled on kernel command line. 
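Note: the audit records above capture each BPF program load/unload by runc and by Calico's bpftool helpers; the PROCTITLE field is the invoking command line, NUL-separated and hex-encoded. A small sketch of decoding one of the recurring values (plain hex decode, so the result is deterministic):
  printf '627066746F6F6C006D6170006C697374002D2D6A736F6E' | xxd -r -p | tr '\0' ' '; echo
  # prints: bpftool map list --json
The longer values decode the same way, e.g. to "bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0" and "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp".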
Jan 23 23:35:04.407000 audit: BPF prog-id=205 op=LOAD Jan 23 23:35:04.407000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe04bd4a8 a2=98 a3=ffffe04bd498 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.407000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.408000 audit: BPF prog-id=205 op=UNLOAD Jan 23 23:35:04.408000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe04bd478 a3=0 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.408000 audit: BPF prog-id=206 op=LOAD Jan 23 23:35:04.408000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe04bd138 a2=74 a3=95 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.409000 audit: BPF prog-id=206 op=UNLOAD Jan 23 23:35:04.409000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.409000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.409000 audit: BPF prog-id=207 op=LOAD Jan 23 23:35:04.409000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe04bd198 a2=94 a3=2 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.409000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.409000 audit: BPF prog-id=207 op=UNLOAD Jan 23 23:35:04.409000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.409000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.633000 audit: BPF prog-id=208 op=LOAD Jan 23 23:35:04.633000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe04bd158 a2=40 a3=ffffe04bd188 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.633000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.634000 audit: BPF prog-id=208 op=UNLOAD Jan 23 23:35:04.634000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe04bd188 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.634000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.652000 audit: BPF prog-id=209 op=LOAD Jan 23 23:35:04.652000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe04bd168 a2=94 a3=4 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.652000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.653000 audit: BPF prog-id=209 op=UNLOAD Jan 23 23:35:04.653000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.653000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.654000 audit: BPF prog-id=210 op=LOAD Jan 23 23:35:04.654000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe04bcfa8 a2=94 a3=5 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.654000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.654000 audit: BPF prog-id=210 op=UNLOAD Jan 23 23:35:04.654000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.654000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.654000 audit: BPF prog-id=211 op=LOAD Jan 23 23:35:04.654000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe04bd1d8 a2=94 a3=6 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.654000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.654000 audit: BPF prog-id=211 op=UNLOAD Jan 23 23:35:04.654000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.654000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.655000 audit: BPF prog-id=212 op=LOAD Jan 23 23:35:04.655000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe04bc9a8 a2=94 a3=83 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.655000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.657000 audit: BPF prog-id=213 op=LOAD Jan 23 23:35:04.657000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe04bc768 a2=94 a3=2 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.657000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.657000 audit: BPF prog-id=213 op=UNLOAD Jan 23 23:35:04.657000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.657000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.658000 audit: BPF prog-id=212 op=UNLOAD Jan 23 23:35:04.658000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=34d26620 
a3=34d19b00 items=0 ppid=4633 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.658000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 23:35:04.670000 audit: BPF prog-id=204 op=UNLOAD Jan 23 23:35:04.670000 audit[4633]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4001158200 a2=0 a3=0 items=0 ppid=4597 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.670000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 23:35:04.769000 audit[4834]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:04.769000 audit[4834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe2c53fc0 a2=0 a3=ffffb29f7fa8 items=0 ppid=4633 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.769000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:04.770000 audit[4836]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4836 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:04.770000 audit[4836]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffff222400 a2=0 a3=ffffac2c5fa8 items=0 ppid=4633 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.770000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:04.788000 audit[4835]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4835 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:04.788000 audit[4835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc19bbf30 a2=0 a3=ffff939e0fa8 items=0 ppid=4633 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.788000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:04.797000 audit[4838]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:04.797000 audit[4838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffff3cb72c0 a2=0 a3=ffff81cecfa8 items=0 ppid=4633 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:04.797000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:05.057876 kubelet[3413]: E0123 23:35:05.057660 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f9b44c667-h6s7s" podUID="24e24657-8e54-4ae7-acdb-2eda45aabbdf" Jan 23 23:35:05.608160 containerd[1972]: time="2026-01-23T23:35:05.608089690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rq9p,Uid:ff001061-ac71-4e8e-befd-96b93fbb4d6b,Namespace:kube-system,Attempt:0,}" Jan 23 23:35:05.608729 containerd[1972]: time="2026-01-23T23:35:05.608493862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b65b9-26wjv,Uid:6c551026-ffac-43ea-999f-0823acd8fbb1,Namespace:calico-system,Attempt:0,}" Jan 23 23:35:05.608729 containerd[1972]: time="2026-01-23T23:35:05.608629714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b9c97bcf-96pwk,Uid:1b0b2743-62ac-460d-ba7c-52d229e3b875,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:35:05.963991 (udev-worker)[4813]: Network interface NamePolicy= disabled on kernel command line. 
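Note: the kubelet entries above show both whisker containers stuck in ImagePullBackOff because ghcr.io returns 404 for the v3.30.4 tags. A way to reproduce the failure by hand from the node, assuming crictl/ctr access (not shown in the log):
  crictl pull ghcr.io/flatcar/calico/whisker:v3.30.4
  ctr --namespace k8s.io images pull ghcr.io/flatcar/calico/whisker-backend:v3.30.4
  # both are expected to fail with "not found", matching the ErrImagePull entries above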
Jan 23 23:35:05.965986 systemd-networkd[1781]: calibbb2ba1024b: Link UP Jan 23 23:35:05.967315 systemd-networkd[1781]: calibbb2ba1024b: Gained carrier Jan 23 23:35:05.999419 systemd-networkd[1781]: vxlan.calico: Gained IPv6LL Jan 23 23:35:06.005949 containerd[1972]: 2026-01-23 23:35:05.749 [INFO][4876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0 calico-apiserver-68b9c97bcf- calico-apiserver 1b0b2743-62ac-460d-ba7c-52d229e3b875 868 0 2026-01-23 23:34:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68b9c97bcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-100 calico-apiserver-68b9c97bcf-96pwk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibbb2ba1024b [] [] }} ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-96pwk" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-" Jan 23 23:35:06.005949 containerd[1972]: 2026-01-23 23:35:05.750 [INFO][4876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-96pwk" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" Jan 23 23:35:06.005949 containerd[1972]: 2026-01-23 23:35:05.852 [INFO][4891] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" HandleID="k8s-pod-network.9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Workload="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" Jan 23 23:35:06.006351 containerd[1972]: 2026-01-23 23:35:05.853 [INFO][4891] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" HandleID="k8s-pod-network.9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Workload="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036f650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-100", "pod":"calico-apiserver-68b9c97bcf-96pwk", "timestamp":"2026-01-23 23:35:05.852411935 +0000 UTC"}, Hostname:"ip-172-31-23-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:35:06.006351 containerd[1972]: 2026-01-23 23:35:05.853 [INFO][4891] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:35:06.006351 containerd[1972]: 2026-01-23 23:35:05.853 [INFO][4891] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
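Annotation: systemd-networkd reports the host-side Calico veth (calibbb2ba1024b) coming up and vxlan.calico gaining IPv6 link-local connectivity. A small sketch for re-checking the same state from sysfs (interface names are taken from the log; /sys/class/net/<iface>/operstate is the standard kernel attribute behind "Gained carrier"):

from pathlib import Path

for iface in ("calibbb2ba1024b", "vxlan.calico"):
    # prints "up" once the interface has carrier, mirroring the journal messages above
    state = Path(f"/sys/class/net/{iface}/operstate").read_text().strip()
    print(iface, state)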
Jan 23 23:35:06.006351 containerd[1972]: 2026-01-23 23:35:05.853 [INFO][4891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-100' Jan 23 23:35:06.006351 containerd[1972]: 2026-01-23 23:35:05.874 [INFO][4891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" host="ip-172-31-23-100" Jan 23 23:35:06.006351 containerd[1972]: 2026-01-23 23:35:05.890 [INFO][4891] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-100" Jan 23 23:35:06.006351 containerd[1972]: 2026-01-23 23:35:05.906 [INFO][4891] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:06.006351 containerd[1972]: 2026-01-23 23:35:05.916 [INFO][4891] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:06.006351 containerd[1972]: 2026-01-23 23:35:05.923 [INFO][4891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:06.007592 containerd[1972]: 2026-01-23 23:35:05.924 [INFO][4891] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" host="ip-172-31-23-100" Jan 23 23:35:06.007592 containerd[1972]: 2026-01-23 23:35:05.928 [INFO][4891] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59 Jan 23 23:35:06.007592 containerd[1972]: 2026-01-23 23:35:05.939 [INFO][4891] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" host="ip-172-31-23-100" Jan 23 23:35:06.007592 containerd[1972]: 2026-01-23 23:35:05.951 [INFO][4891] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.2/26] block=192.168.22.0/26 handle="k8s-pod-network.9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" host="ip-172-31-23-100" Jan 23 23:35:06.007592 containerd[1972]: 2026-01-23 23:35:05.952 [INFO][4891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.2/26] handle="k8s-pod-network.9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" host="ip-172-31-23-100" Jan 23 23:35:06.007592 containerd[1972]: 2026-01-23 23:35:05.952 [INFO][4891] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
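Annotation: Calico IPAM claims 192.168.22.2 out of the block 192.168.22.0/26 that is affine to this node. A quick sketch of the block arithmetic with Python's ipaddress module (all values copied from the log above):

import ipaddress

block = ipaddress.ip_network("192.168.22.0/26")
hosts = list(block.hosts())
print(block.num_addresses)      # 64 addresses in a /26
print(hosts[0], hosts[-1])      # 192.168.22.1 192.168.22.62 (hosts() skips the first and last address)
print(ipaddress.ip_address("192.168.22.2") in block)   # True - the IP claimed for the apiserver pod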
Jan 23 23:35:06.007592 containerd[1972]: 2026-01-23 23:35:05.952 [INFO][4891] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.2/26] IPv6=[] ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" HandleID="k8s-pod-network.9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Workload="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" Jan 23 23:35:06.009267 containerd[1972]: 2026-01-23 23:35:05.958 [INFO][4876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-96pwk" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0", GenerateName:"calico-apiserver-68b9c97bcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b0b2743-62ac-460d-ba7c-52d229e3b875", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b9c97bcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"", Pod:"calico-apiserver-68b9c97bcf-96pwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbb2ba1024b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:06.009416 containerd[1972]: 2026-01-23 23:35:05.958 [INFO][4876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.2/32] ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-96pwk" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" Jan 23 23:35:06.009416 containerd[1972]: 2026-01-23 23:35:05.958 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbb2ba1024b ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-96pwk" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" Jan 23 23:35:06.009416 containerd[1972]: 2026-01-23 23:35:05.962 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-96pwk" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" Jan 23 23:35:06.009672 containerd[1972]: 2026-01-23 23:35:05.963 [INFO][4876] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-96pwk" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0", GenerateName:"calico-apiserver-68b9c97bcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b0b2743-62ac-460d-ba7c-52d229e3b875", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b9c97bcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59", Pod:"calico-apiserver-68b9c97bcf-96pwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbb2ba1024b", MAC:"8e:bc:01:42:f2:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:06.009801 containerd[1972]: 2026-01-23 23:35:06.000 [INFO][4876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-96pwk" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--96pwk-eth0" Jan 23 23:35:06.086569 systemd-networkd[1781]: cali65be505e4e4: Link UP Jan 23 23:35:06.090405 systemd-networkd[1781]: cali65be505e4e4: Gained carrier Jan 23 23:35:06.118000 audit[4935]: NETFILTER_CFG table=filter:127 family=2 entries=50 op=nft_register_chain pid=4935 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:06.121835 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 23 23:35:06.121992 kernel: audit: type=1325 audit(1769211306.118:657): table=filter:127 family=2 entries=50 op=nft_register_chain pid=4935 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:06.118000 audit[4935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffedd3dec0 a2=0 a3=ffffabc09fa8 items=0 ppid=4633 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.135537 kernel: audit: type=1300 audit(1769211306.118:657): arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffedd3dec0 a2=0 a3=ffffabc09fa8 items=0 ppid=4633 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.118000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:06.145992 kernel: audit: type=1327 audit(1769211306.118:657): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:06.147871 containerd[1972]: time="2026-01-23T23:35:06.147744033Z" level=info msg="connecting to shim 9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59" address="unix:///run/containerd/s/e6e08cd6a2769e02aa7ff38b5e6f4bb9e52e7e9de6f544205ea857b9a2ad0532" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:35:06.149963 containerd[1972]: 2026-01-23 23:35:05.827 [INFO][4857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0 calico-kube-controllers-57b65b9- calico-system 6c551026-ffac-43ea-999f-0823acd8fbb1 854 0 2026-01-23 23:34:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57b65b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-100 calico-kube-controllers-57b65b9-26wjv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali65be505e4e4 [] [] }} ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Namespace="calico-system" Pod="calico-kube-controllers-57b65b9-26wjv" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-" Jan 23 23:35:06.149963 containerd[1972]: 2026-01-23 23:35:05.827 [INFO][4857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Namespace="calico-system" Pod="calico-kube-controllers-57b65b9-26wjv" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" Jan 23 23:35:06.149963 containerd[1972]: 2026-01-23 23:35:05.933 [INFO][4905] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" HandleID="k8s-pod-network.033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Workload="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" Jan 23 23:35:06.150283 containerd[1972]: 2026-01-23 23:35:05.933 [INFO][4905] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" HandleID="k8s-pod-network.033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Workload="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-100", "pod":"calico-kube-controllers-57b65b9-26wjv", "timestamp":"2026-01-23 23:35:05.933073859 +0000 UTC"}, Hostname:"ip-172-31-23-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:35:06.150283 containerd[1972]: 2026-01-23 
23:35:05.933 [INFO][4905] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:35:06.150283 containerd[1972]: 2026-01-23 23:35:05.952 [INFO][4905] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:35:06.150283 containerd[1972]: 2026-01-23 23:35:05.952 [INFO][4905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-100' Jan 23 23:35:06.150283 containerd[1972]: 2026-01-23 23:35:05.981 [INFO][4905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" host="ip-172-31-23-100" Jan 23 23:35:06.150283 containerd[1972]: 2026-01-23 23:35:06.004 [INFO][4905] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-100" Jan 23 23:35:06.150283 containerd[1972]: 2026-01-23 23:35:06.018 [INFO][4905] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:06.150283 containerd[1972]: 2026-01-23 23:35:06.023 [INFO][4905] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:06.150283 containerd[1972]: 2026-01-23 23:35:06.030 [INFO][4905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:06.150779 containerd[1972]: 2026-01-23 23:35:06.030 [INFO][4905] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" host="ip-172-31-23-100" Jan 23 23:35:06.150779 containerd[1972]: 2026-01-23 23:35:06.034 [INFO][4905] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97 Jan 23 23:35:06.150779 containerd[1972]: 2026-01-23 23:35:06.055 [INFO][4905] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" host="ip-172-31-23-100" Jan 23 23:35:06.150779 containerd[1972]: 2026-01-23 23:35:06.071 [INFO][4905] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.3/26] block=192.168.22.0/26 handle="k8s-pod-network.033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" host="ip-172-31-23-100" Jan 23 23:35:06.150779 containerd[1972]: 2026-01-23 23:35:06.071 [INFO][4905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.3/26] handle="k8s-pod-network.033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" host="ip-172-31-23-100" Jan 23 23:35:06.150779 containerd[1972]: 2026-01-23 23:35:06.071 [INFO][4905] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
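Annotation: the kube-controllers pod goes through the same sequence and receives the next free address, 192.168.22.3, because assignments on this node are serialized by the host-wide IPAM lock seen in these entries. A toy allocator sketch (not Calico's real implementation) that reproduces the next-free-address behaviour:

import ipaddress

block = ipaddress.ip_network("192.168.22.0/26")
# assumed already in use at this point; .2 was claimed just above for the apiserver pod
allocated = {ipaddress.ip_address("192.168.22.1"), ipaddress.ip_address("192.168.22.2")}

def claim_next(block, allocated):
    for addr in block.hosts():
        if addr not in allocated:
            allocated.add(addr)
            return addr
    raise RuntimeError("block exhausted")

print(claim_next(block, allocated))   # -> 192.168.22.3, matching the claim logged below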
Jan 23 23:35:06.150779 containerd[1972]: 2026-01-23 23:35:06.071 [INFO][4905] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.3/26] IPv6=[] ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" HandleID="k8s-pod-network.033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Workload="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" Jan 23 23:35:06.153795 containerd[1972]: 2026-01-23 23:35:06.076 [INFO][4857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Namespace="calico-system" Pod="calico-kube-controllers-57b65b9-26wjv" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0", GenerateName:"calico-kube-controllers-57b65b9-", Namespace:"calico-system", SelfLink:"", UID:"6c551026-ffac-43ea-999f-0823acd8fbb1", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b65b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"", Pod:"calico-kube-controllers-57b65b9-26wjv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali65be505e4e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:06.154424 containerd[1972]: 2026-01-23 23:35:06.077 [INFO][4857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.3/32] ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Namespace="calico-system" Pod="calico-kube-controllers-57b65b9-26wjv" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" Jan 23 23:35:06.154424 containerd[1972]: 2026-01-23 23:35:06.077 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65be505e4e4 ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Namespace="calico-system" Pod="calico-kube-controllers-57b65b9-26wjv" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" Jan 23 23:35:06.154424 containerd[1972]: 2026-01-23 23:35:06.090 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Namespace="calico-system" Pod="calico-kube-controllers-57b65b9-26wjv" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" Jan 23 23:35:06.155476 containerd[1972]: 2026-01-23 23:35:06.108 [INFO][4857] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Namespace="calico-system" Pod="calico-kube-controllers-57b65b9-26wjv" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0", GenerateName:"calico-kube-controllers-57b65b9-", Namespace:"calico-system", SelfLink:"", UID:"6c551026-ffac-43ea-999f-0823acd8fbb1", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b65b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97", Pod:"calico-kube-controllers-57b65b9-26wjv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali65be505e4e4", MAC:"66:82:bd:3d:ad:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:06.155870 containerd[1972]: 2026-01-23 23:35:06.139 [INFO][4857] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" Namespace="calico-system" Pod="calico-kube-controllers-57b65b9-26wjv" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--kube--controllers--57b65b9--26wjv-eth0" Jan 23 23:35:06.225000 audit[4978]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:06.225000 audit[4978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=fffff6f33aa0 a2=0 a3=ffff96fd6fa8 items=0 ppid=4633 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.244168 kernel: audit: type=1325 audit(1769211306.225:658): table=filter:128 family=2 entries=40 op=nft_register_chain pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:06.244320 kernel: audit: type=1300 audit(1769211306.225:658): arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=fffff6f33aa0 a2=0 a3=ffff96fd6fa8 items=0 ppid=4633 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.247304 systemd[1]: Started 
cri-containerd-9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59.scope - libcontainer container 9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59. Jan 23 23:35:06.225000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:06.257513 kernel: audit: type=1327 audit(1769211306.225:658): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:06.266387 containerd[1972]: time="2026-01-23T23:35:06.266211129Z" level=info msg="connecting to shim 033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97" address="unix:///run/containerd/s/6300c6f534db5e7e6c39f61231211e6246cc817ee10c1120128b9cc3e4634986" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:35:06.298058 systemd-networkd[1781]: califf33d86163e: Link UP Jan 23 23:35:06.298628 systemd-networkd[1781]: califf33d86163e: Gained carrier Jan 23 23:35:06.355942 containerd[1972]: 2026-01-23 23:35:05.821 [INFO][4853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0 coredns-674b8bbfcf- kube-system ff001061-ac71-4e8e-befd-96b93fbb4d6b 859 0 2026-01-23 23:34:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-100 coredns-674b8bbfcf-5rq9p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf33d86163e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rq9p" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-" Jan 23 23:35:06.355942 containerd[1972]: 2026-01-23 23:35:05.823 [INFO][4853] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rq9p" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" Jan 23 23:35:06.355942 containerd[1972]: 2026-01-23 23:35:05.935 [INFO][4900] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" HandleID="k8s-pod-network.2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Workload="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" Jan 23 23:35:06.356235 containerd[1972]: 2026-01-23 23:35:05.936 [INFO][4900] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" HandleID="k8s-pod-network.2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Workload="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030c370), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-100", "pod":"coredns-674b8bbfcf-5rq9p", "timestamp":"2026-01-23 23:35:05.935772971 +0000 UTC"}, Hostname:"ip-172-31-23-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 23 23:35:06.356235 containerd[1972]: 2026-01-23 23:35:05.936 [INFO][4900] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:35:06.356235 containerd[1972]: 2026-01-23 23:35:06.072 [INFO][4900] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:35:06.356235 containerd[1972]: 2026-01-23 23:35:06.073 [INFO][4900] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-100' Jan 23 23:35:06.356235 containerd[1972]: 2026-01-23 23:35:06.108 [INFO][4900] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" host="ip-172-31-23-100" Jan 23 23:35:06.356235 containerd[1972]: 2026-01-23 23:35:06.130 [INFO][4900] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-100" Jan 23 23:35:06.356235 containerd[1972]: 2026-01-23 23:35:06.172 [INFO][4900] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:06.356235 containerd[1972]: 2026-01-23 23:35:06.183 [INFO][4900] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:06.356235 containerd[1972]: 2026-01-23 23:35:06.193 [INFO][4900] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:06.356736 containerd[1972]: 2026-01-23 23:35:06.193 [INFO][4900] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" host="ip-172-31-23-100" Jan 23 23:35:06.356736 containerd[1972]: 2026-01-23 23:35:06.202 [INFO][4900] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3 Jan 23 23:35:06.356736 containerd[1972]: 2026-01-23 23:35:06.221 [INFO][4900] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" host="ip-172-31-23-100" Jan 23 23:35:06.356736 containerd[1972]: 2026-01-23 23:35:06.264 [INFO][4900] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.4/26] block=192.168.22.0/26 handle="k8s-pod-network.2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" host="ip-172-31-23-100" Jan 23 23:35:06.356736 containerd[1972]: 2026-01-23 23:35:06.265 [INFO][4900] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.4/26] handle="k8s-pod-network.2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" host="ip-172-31-23-100" Jan 23 23:35:06.356736 containerd[1972]: 2026-01-23 23:35:06.266 [INFO][4900] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:35:06.356736 containerd[1972]: 2026-01-23 23:35:06.266 [INFO][4900] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.4/26] IPv6=[] ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" HandleID="k8s-pod-network.2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Workload="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" Jan 23 23:35:06.357152 containerd[1972]: 2026-01-23 23:35:06.283 [INFO][4853] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rq9p" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ff001061-ac71-4e8e-befd-96b93fbb4d6b", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"", Pod:"coredns-674b8bbfcf-5rq9p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf33d86163e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:06.357152 containerd[1972]: 2026-01-23 23:35:06.283 [INFO][4853] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.4/32] ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rq9p" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" Jan 23 23:35:06.357152 containerd[1972]: 2026-01-23 23:35:06.283 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf33d86163e ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rq9p" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" Jan 23 23:35:06.357152 containerd[1972]: 2026-01-23 23:35:06.298 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rq9p" 
WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" Jan 23 23:35:06.357152 containerd[1972]: 2026-01-23 23:35:06.298 [INFO][4853] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rq9p" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ff001061-ac71-4e8e-befd-96b93fbb4d6b", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3", Pod:"coredns-674b8bbfcf-5rq9p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf33d86163e", MAC:"fe:36:25:83:76:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:06.357152 containerd[1972]: 2026-01-23 23:35:06.331 [INFO][4853] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rq9p" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--5rq9p-eth0" Jan 23 23:35:06.360000 audit: BPF prog-id=214 op=LOAD Jan 23 23:35:06.363000 audit: BPF prog-id=215 op=LOAD Jan 23 23:35:06.366759 kernel: audit: type=1334 audit(1769211306.360:659): prog-id=214 op=LOAD Jan 23 23:35:06.366861 kernel: audit: type=1334 audit(1769211306.363:660): prog-id=215 op=LOAD Jan 23 23:35:06.363000 audit[4955]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4940 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.375266 kernel: audit: type=1300 audit(1769211306.363:660): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4940 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.386429 kernel: audit: type=1327 audit(1769211306.363:660): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613532343539336639636665366165303530613166316232306664 Jan 23 23:35:06.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613532343539336639636665366165303530613166316232306664 Jan 23 23:35:06.363000 audit: BPF prog-id=215 op=UNLOAD Jan 23 23:35:06.363000 audit[4955]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4940 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613532343539336639636665366165303530613166316232306664 Jan 23 23:35:06.364000 audit: BPF prog-id=216 op=LOAD Jan 23 23:35:06.364000 audit[4955]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4940 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613532343539336639636665366165303530613166316232306664 Jan 23 23:35:06.367000 audit: BPF prog-id=217 op=LOAD Jan 23 23:35:06.367000 audit[4955]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4940 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613532343539336639636665366165303530613166316232306664 Jan 23 23:35:06.374000 audit: BPF prog-id=217 op=UNLOAD Jan 23 23:35:06.374000 audit[4955]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4940 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613532343539336639636665366165303530613166316232306664 Jan 23 23:35:06.374000 audit: BPF prog-id=216 op=UNLOAD Jan 23 23:35:06.374000 audit[4955]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4940 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613532343539336639636665366165303530613166316232306664 Jan 23 23:35:06.374000 audit: BPF prog-id=218 op=LOAD Jan 23 23:35:06.374000 audit[4955]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4940 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613532343539336639636665366165303530613166316232306664 Jan 23 23:35:06.423000 audit[5028]: NETFILTER_CFG table=filter:129 family=2 entries=50 op=nft_register_chain pid=5028 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:06.423000 audit[5028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=ffffe3313a40 a2=0 a3=ffff9a15cfa8 items=0 ppid=4633 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.423000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:06.426579 systemd[1]: Started cri-containerd-033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97.scope - libcontainer container 033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97. 
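Annotation: the audit SYSCALL records in this stretch are raw numbers. arch=c00000b7 is AUDIT_ARCH_AARCH64, so the syscall values follow the arm64 (asm-generic) table: 280 is bpf, 57 is close, 211 is sendmsg, 35 is unlinkat. A small lookup sketch under that assumption, covering only the numbers that appear in this log:

# assumes arch=c00000b7 (AUDIT_ARCH_AARCH64), i.e. the arm64 asm-generic syscall numbering
ARM64_SYSCALLS = {35: "unlinkat", 57: "close", 211: "sendmsg", 280: "bpf"}

def describe(nr: int) -> str:
    return ARM64_SYSCALLS.get(nr, f"syscall {nr}")

print(describe(280))   # bpf     - the prog loads runc performs while starting the container
print(describe(211))   # sendmsg - the netlink traffic behind the NETFILTER_CFG records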
Jan 23 23:35:06.462387 containerd[1972]: time="2026-01-23T23:35:06.462116782Z" level=info msg="connecting to shim 2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3" address="unix:///run/containerd/s/f8122c0dd40e5a83d824a43f8abb4ef81dc18444bf96cc099ab36dcd50d7ad69" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:35:06.488000 audit: BPF prog-id=219 op=LOAD Jan 23 23:35:06.499000 audit: BPF prog-id=220 op=LOAD Jan 23 23:35:06.499000 audit[5009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4989 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333732363438326432386339316237323134306361383830323131 Jan 23 23:35:06.499000 audit: BPF prog-id=220 op=UNLOAD Jan 23 23:35:06.499000 audit[5009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4989 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333732363438326432386339316237323134306361383830323131 Jan 23 23:35:06.500000 audit: BPF prog-id=221 op=LOAD Jan 23 23:35:06.500000 audit[5009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4989 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333732363438326432386339316237323134306361383830323131 Jan 23 23:35:06.502000 audit: BPF prog-id=222 op=LOAD Jan 23 23:35:06.502000 audit[5009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4989 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333732363438326432386339316237323134306361383830323131 Jan 23 23:35:06.503000 audit: BPF prog-id=222 op=UNLOAD Jan 23 23:35:06.503000 audit[5009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4989 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.503000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333732363438326432386339316237323134306361383830323131 Jan 23 23:35:06.504000 audit: BPF prog-id=221 op=UNLOAD Jan 23 23:35:06.504000 audit[5009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4989 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333732363438326432386339316237323134306361383830323131 Jan 23 23:35:06.504000 audit: BPF prog-id=223 op=LOAD Jan 23 23:35:06.504000 audit[5009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4989 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333732363438326432386339316237323134306361383830323131 Jan 23 23:35:06.559655 systemd[1]: Started cri-containerd-2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3.scope - libcontainer container 2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3. 
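Annotation: the bursts of "BPF prog-id=... op=LOAD/UNLOAD" events by comm="runc" around each "Started cri-containerd-..." line most likely come from runc applying the container's device access policy as an eBPF program on cgroup v2, loading the new program and unloading the superseded one. A small sketch for summarizing these events from an exported journal text file (the filename is illustrative):

import re
from collections import Counter

ops = Counter()
with open("journal.txt") as fh:        # assumed plain-text export of this log
    for line in fh:
        m = re.search(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", line)
        if m:
            ops[m.group(2)] += 1

print(ops)   # e.g. Counter({'LOAD': ..., 'UNLOAD': ...}) - one LOAD per program attached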
Jan 23 23:35:06.586409 containerd[1972]: time="2026-01-23T23:35:06.586248659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b9c97bcf-96pwk,Uid:1b0b2743-62ac-460d-ba7c-52d229e3b875,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9ea524593f9cfe6ae050a1f1b20fdd6f713db07e4cc88b3bd79ebfa36f761e59\"" Jan 23 23:35:06.599473 containerd[1972]: time="2026-01-23T23:35:06.599331275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:35:06.617938 containerd[1972]: time="2026-01-23T23:35:06.615536135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-778cf5d48d-s7h4h,Uid:64fe3344-4e80-444f-ba70-e34e02720a15,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:35:06.656833 containerd[1972]: time="2026-01-23T23:35:06.655340687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b65b9-26wjv,Uid:6c551026-ffac-43ea-999f-0823acd8fbb1,Namespace:calico-system,Attempt:0,} returns sandbox id \"033726482d28c91b72140ca88021106507801e8aabe80e7fc5411493f6797a97\"" Jan 23 23:35:06.675000 audit: BPF prog-id=224 op=LOAD Jan 23 23:35:06.680000 audit: BPF prog-id=225 op=LOAD Jan 23 23:35:06.680000 audit[5058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5045 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363838386338303065306666343431366633353930343530303732 Jan 23 23:35:06.681000 audit: BPF prog-id=225 op=UNLOAD Jan 23 23:35:06.681000 audit[5058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363838386338303065306666343431366633353930343530303732 Jan 23 23:35:06.681000 audit: BPF prog-id=226 op=LOAD Jan 23 23:35:06.681000 audit[5058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5045 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363838386338303065306666343431366633353930343530303732 Jan 23 23:35:06.682000 audit: BPF prog-id=227 op=LOAD Jan 23 23:35:06.682000 audit[5058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5045 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 23:35:06.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363838386338303065306666343431366633353930343530303732 Jan 23 23:35:06.682000 audit: BPF prog-id=227 op=UNLOAD Jan 23 23:35:06.682000 audit[5058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363838386338303065306666343431366633353930343530303732 Jan 23 23:35:06.683000 audit: BPF prog-id=226 op=UNLOAD Jan 23 23:35:06.683000 audit[5058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363838386338303065306666343431366633353930343530303732 Jan 23 23:35:06.683000 audit: BPF prog-id=228 op=LOAD Jan 23 23:35:06.683000 audit[5058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5045 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363838386338303065306666343431366633353930343530303732 Jan 23 23:35:06.748415 containerd[1972]: time="2026-01-23T23:35:06.748270715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rq9p,Uid:ff001061-ac71-4e8e-befd-96b93fbb4d6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3\"" Jan 23 23:35:06.762259 containerd[1972]: time="2026-01-23T23:35:06.761863428Z" level=info msg="CreateContainer within sandbox \"2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:35:06.799950 containerd[1972]: time="2026-01-23T23:35:06.799822080Z" level=info msg="Container 45d3ab2837bf52a780d1b94d1c99ff64b1489dc4ba36b2b3e6ce5ced883e8c35: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:35:06.821746 containerd[1972]: time="2026-01-23T23:35:06.821690808Z" level=info msg="CreateContainer within sandbox \"2b6888c800e0ff4416f3590450072641f7468893e801a8d5155ac1efc814b3c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45d3ab2837bf52a780d1b94d1c99ff64b1489dc4ba36b2b3e6ce5ced883e8c35\"" Jan 23 23:35:06.823298 containerd[1972]: 
time="2026-01-23T23:35:06.823126656Z" level=info msg="StartContainer for \"45d3ab2837bf52a780d1b94d1c99ff64b1489dc4ba36b2b3e6ce5ced883e8c35\"" Jan 23 23:35:06.826116 containerd[1972]: time="2026-01-23T23:35:06.826005648Z" level=info msg="connecting to shim 45d3ab2837bf52a780d1b94d1c99ff64b1489dc4ba36b2b3e6ce5ced883e8c35" address="unix:///run/containerd/s/f8122c0dd40e5a83d824a43f8abb4ef81dc18444bf96cc099ab36dcd50d7ad69" protocol=ttrpc version=3 Jan 23 23:35:06.878430 systemd[1]: Started cri-containerd-45d3ab2837bf52a780d1b94d1c99ff64b1489dc4ba36b2b3e6ce5ced883e8c35.scope - libcontainer container 45d3ab2837bf52a780d1b94d1c99ff64b1489dc4ba36b2b3e6ce5ced883e8c35. Jan 23 23:35:06.913014 containerd[1972]: time="2026-01-23T23:35:06.912947076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:06.915727 containerd[1972]: time="2026-01-23T23:35:06.915370920Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:35:06.915727 containerd[1972]: time="2026-01-23T23:35:06.915453132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:06.916353 kubelet[3413]: E0123 23:35:06.916089 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:06.916353 kubelet[3413]: E0123 23:35:06.916148 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:06.919119 kubelet[3413]: E0123 23:35:06.918306 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wlxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b9c97bcf-96pwk_calico-apiserver(1b0b2743-62ac-460d-ba7c-52d229e3b875): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:06.919852 kubelet[3413]: E0123 23:35:06.919741 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:35:06.920293 containerd[1972]: time="2026-01-23T23:35:06.920252496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:35:06.947000 audit: BPF prog-id=229 op=LOAD Jan 23 23:35:06.949000 audit: BPF prog-id=230 op=LOAD Jan 23 23:35:06.949000 audit[5115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5045 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643361623238333762663532613738306431623934643163393966 Jan 23 23:35:06.949000 audit: BPF prog-id=230 op=UNLOAD Jan 23 23:35:06.949000 audit[5115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643361623238333762663532613738306431623934643163393966 Jan 23 23:35:06.949000 audit: BPF prog-id=231 op=LOAD Jan 23 23:35:06.949000 audit[5115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5045 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643361623238333762663532613738306431623934643163393966 Jan 23 23:35:06.950000 audit: BPF prog-id=232 op=LOAD Jan 23 23:35:06.950000 audit[5115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5045 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643361623238333762663532613738306431623934643163393966 Jan 23 23:35:06.950000 audit: BPF prog-id=232 op=UNLOAD Jan 23 23:35:06.950000 audit[5115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643361623238333762663532613738306431623934643163393966 Jan 23 23:35:06.950000 audit: BPF prog-id=231 op=UNLOAD Jan 23 23:35:06.950000 audit[5115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643361623238333762663532613738306431623934643163393966 Jan 23 23:35:06.950000 audit: BPF prog-id=233 op=LOAD Jan 23 23:35:06.950000 audit[5115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5045 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:06.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435643361623238333762663532613738306431623934643163393966 Jan 23 23:35:07.005285 systemd-networkd[1781]: cali2e8b0de17fc: Link UP Jan 23 23:35:07.007354 systemd-networkd[1781]: cali2e8b0de17fc: Gained carrier Jan 23 23:35:07.027299 containerd[1972]: time="2026-01-23T23:35:07.027061761Z" level=info msg="StartContainer for \"45d3ab2837bf52a780d1b94d1c99ff64b1489dc4ba36b2b3e6ce5ced883e8c35\" returns successfully" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.775 [INFO][5091] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0 calico-apiserver-778cf5d48d- calico-apiserver 64fe3344-4e80-444f-ba70-e34e02720a15 866 0 2026-01-23 23:34:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:778cf5d48d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-100 calico-apiserver-778cf5d48d-s7h4h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e8b0de17fc [] [] }} ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Namespace="calico-apiserver" Pod="calico-apiserver-778cf5d48d-s7h4h" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.775 [INFO][5091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Namespace="calico-apiserver" Pod="calico-apiserver-778cf5d48d-s7h4h" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.857 [INFO][5110] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" HandleID="k8s-pod-network.f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Workload="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.859 [INFO][5110] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" HandleID="k8s-pod-network.f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Workload="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-100", "pod":"calico-apiserver-778cf5d48d-s7h4h", "timestamp":"2026-01-23 23:35:06.857792112 +0000 UTC"}, Hostname:"ip-172-31-23-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.859 [INFO][5110] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.859 [INFO][5110] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.859 [INFO][5110] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-100' Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.891 [INFO][5110] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" host="ip-172-31-23-100" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.908 [INFO][5110] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-100" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.918 [INFO][5110] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.928 [INFO][5110] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.936 [INFO][5110] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.937 [INFO][5110] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" host="ip-172-31-23-100" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.942 [INFO][5110] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61 Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.956 [INFO][5110] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" host="ip-172-31-23-100" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.979 [INFO][5110] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.5/26] block=192.168.22.0/26 handle="k8s-pod-network.f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" host="ip-172-31-23-100" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.980 [INFO][5110] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.5/26] handle="k8s-pod-network.f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" host="ip-172-31-23-100" Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.981 [INFO][5110] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:35:07.070460 containerd[1972]: 2026-01-23 23:35:06.982 [INFO][5110] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.5/26] IPv6=[] ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" HandleID="k8s-pod-network.f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Workload="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" Jan 23 23:35:07.071585 containerd[1972]: 2026-01-23 23:35:06.992 [INFO][5091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Namespace="calico-apiserver" Pod="calico-apiserver-778cf5d48d-s7h4h" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0", GenerateName:"calico-apiserver-778cf5d48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"64fe3344-4e80-444f-ba70-e34e02720a15", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"778cf5d48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"", Pod:"calico-apiserver-778cf5d48d-s7h4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e8b0de17fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:07.071585 containerd[1972]: 2026-01-23 23:35:06.992 [INFO][5091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.5/32] ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Namespace="calico-apiserver" Pod="calico-apiserver-778cf5d48d-s7h4h" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" Jan 23 23:35:07.071585 containerd[1972]: 2026-01-23 23:35:06.992 [INFO][5091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e8b0de17fc ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Namespace="calico-apiserver" Pod="calico-apiserver-778cf5d48d-s7h4h" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" Jan 23 23:35:07.071585 containerd[1972]: 2026-01-23 23:35:07.013 [INFO][5091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Namespace="calico-apiserver" Pod="calico-apiserver-778cf5d48d-s7h4h" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" Jan 23 23:35:07.071585 containerd[1972]: 2026-01-23 23:35:07.018 [INFO][5091] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Namespace="calico-apiserver" Pod="calico-apiserver-778cf5d48d-s7h4h" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0", GenerateName:"calico-apiserver-778cf5d48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"64fe3344-4e80-444f-ba70-e34e02720a15", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"778cf5d48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61", Pod:"calico-apiserver-778cf5d48d-s7h4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e8b0de17fc", MAC:"d2:3b:a7:f6:a1:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:07.071585 containerd[1972]: 2026-01-23 23:35:07.057 [INFO][5091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" Namespace="calico-apiserver" Pod="calico-apiserver-778cf5d48d-s7h4h" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--778cf5d48d--s7h4h-eth0" Jan 23 23:35:07.076621 kubelet[3413]: E0123 23:35:07.074096 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:35:07.086028 systemd-networkd[1781]: calibbb2ba1024b: Gained IPv6LL Jan 23 23:35:07.141548 containerd[1972]: time="2026-01-23T23:35:07.141023217Z" level=info msg="connecting to shim f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61" address="unix:///run/containerd/s/92847e7d748fcdccd38e4871bbb3256367a6ae7d98b43c3171a8b31b466484d2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:35:07.154205 kubelet[3413]: I0123 23:35:07.153973 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5rq9p" podStartSLOduration=51.15394243 podStartE2EDuration="51.15394243s" podCreationTimestamp="2026-01-23 23:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:35:07.142525365 +0000 UTC m=+56.802487183" watchObservedRunningTime="2026-01-23 23:35:07.15394243 +0000 UTC m=+56.813904308" Jan 23 23:35:07.203990 containerd[1972]: time="2026-01-23T23:35:07.202788118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:07.208437 containerd[1972]: time="2026-01-23T23:35:07.208180630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:07.210768 containerd[1972]: time="2026-01-23T23:35:07.210585334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:35:07.211117 kubelet[3413]: E0123 23:35:07.211055 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:35:07.211117 kubelet[3413]: E0123 23:35:07.211127 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:35:07.212372 kubelet[3413]: E0123 23:35:07.212259 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q2h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57b65b9-26wjv_calico-system(6c551026-ffac-43ea-999f-0823acd8fbb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:07.213776 kubelet[3413]: E0123 23:35:07.213445 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" podUID="6c551026-ffac-43ea-999f-0823acd8fbb1" Jan 23 23:35:07.237816 systemd[1]: Started cri-containerd-f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61.scope - libcontainer container f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61. 
Jan 23 23:35:07.328000 audit[5189]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:07.328000 audit[5189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc6849730 a2=0 a3=1 items=0 ppid=3525 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.328000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:07.342000 audit[5189]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:07.342000 audit[5189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc6849730 a2=0 a3=1 items=0 ppid=3525 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:07.342000 audit[5195]: NETFILTER_CFG table=filter:132 family=2 entries=49 op=nft_register_chain pid=5195 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:07.342000 audit[5195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25452 a0=3 a1=ffffd284c9b0 a2=0 a3=ffff8835bfa8 items=0 ppid=4633 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.342000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:07.373000 audit: BPF prog-id=234 op=LOAD Jan 23 23:35:07.376000 audit: BPF prog-id=235 op=LOAD Jan 23 23:35:07.376000 audit[5169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5159 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631336664396632653331613939396538366632396434643963323636 Jan 23 23:35:07.377000 audit: BPF prog-id=235 op=UNLOAD Jan 23 23:35:07.377000 audit[5169]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631336664396632653331613939396538366632396434643963323636 Jan 23 23:35:07.377000 audit: BPF prog-id=236 
op=LOAD Jan 23 23:35:07.377000 audit[5169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5159 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631336664396632653331613939396538366632396434643963323636 Jan 23 23:35:07.377000 audit: BPF prog-id=237 op=LOAD Jan 23 23:35:07.377000 audit[5169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5159 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631336664396632653331613939396538366632396434643963323636 Jan 23 23:35:07.377000 audit: BPF prog-id=237 op=UNLOAD Jan 23 23:35:07.377000 audit[5169]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631336664396632653331613939396538366632396434643963323636 Jan 23 23:35:07.377000 audit: BPF prog-id=236 op=UNLOAD Jan 23 23:35:07.377000 audit[5169]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631336664396632653331613939396538366632396434643963323636 Jan 23 23:35:07.377000 audit: BPF prog-id=238 op=LOAD Jan 23 23:35:07.377000 audit[5169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5159 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631336664396632653331613939396538366632396434643963323636 Jan 23 23:35:07.415000 audit[5197]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:07.415000 
audit[5197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc89b5090 a2=0 a3=1 items=0 ppid=3525 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:07.424000 audit[5197]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:07.424000 audit[5197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc89b5090 a2=0 a3=1 items=0 ppid=3525 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.424000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:07.465176 containerd[1972]: time="2026-01-23T23:35:07.464956991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-778cf5d48d-s7h4h,Uid:64fe3344-4e80-444f-ba70-e34e02720a15,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f13fd9f2e31a999e86f29d4d9c2664c6cc0bcce66217abf548f7346fce67af61\"" Jan 23 23:35:07.469863 containerd[1972]: time="2026-01-23T23:35:07.469390907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:35:07.607170 containerd[1972]: time="2026-01-23T23:35:07.607033452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b9c97bcf-2xtt8,Uid:5be57303-da73-45f8-8222-a093d6ce8129,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:35:07.662642 systemd-networkd[1781]: cali65be505e4e4: Gained IPv6LL Jan 23 23:35:07.724968 containerd[1972]: time="2026-01-23T23:35:07.724872048Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:07.730489 containerd[1972]: time="2026-01-23T23:35:07.730413384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:35:07.730858 containerd[1972]: time="2026-01-23T23:35:07.730556460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:07.731380 kubelet[3413]: E0123 23:35:07.731204 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:07.731757 kubelet[3413]: E0123 23:35:07.731342 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:07.733089 kubelet[3413]: E0123 23:35:07.732804 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82mcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-778cf5d48d-s7h4h_calico-apiserver(64fe3344-4e80-444f-ba70-e34e02720a15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:07.734897 kubelet[3413]: E0123 23:35:07.734726 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" podUID="64fe3344-4e80-444f-ba70-e34e02720a15" Jan 23 23:35:07.834954 systemd-networkd[1781]: cali65c71e3294b: Link UP Jan 23 23:35:07.836685 systemd-networkd[1781]: cali65c71e3294b: Gained carrier Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.701 [INFO][5206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0 calico-apiserver-68b9c97bcf- calico-apiserver 5be57303-da73-45f8-8222-a093d6ce8129 864 0 2026-01-23 23:34:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68b9c97bcf projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-100 calico-apiserver-68b9c97bcf-2xtt8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65c71e3294b [] [] }} ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-2xtt8" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.702 [INFO][5206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-2xtt8" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.761 [INFO][5219] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" HandleID="k8s-pod-network.e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Workload="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.761 [INFO][5219] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" HandleID="k8s-pod-network.e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Workload="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-100", "pod":"calico-apiserver-68b9c97bcf-2xtt8", "timestamp":"2026-01-23 23:35:07.761098189 +0000 UTC"}, Hostname:"ip-172-31-23-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.761 [INFO][5219] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.761 [INFO][5219] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.761 [INFO][5219] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-100' Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.779 [INFO][5219] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" host="ip-172-31-23-100" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.786 [INFO][5219] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-100" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.793 [INFO][5219] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.796 [INFO][5219] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.800 [INFO][5219] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.801 [INFO][5219] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" host="ip-172-31-23-100" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.804 [INFO][5219] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7 Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.811 [INFO][5219] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" host="ip-172-31-23-100" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.822 [INFO][5219] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.6/26] block=192.168.22.0/26 handle="k8s-pod-network.e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" host="ip-172-31-23-100" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.822 [INFO][5219] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.6/26] handle="k8s-pod-network.e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" host="ip-172-31-23-100" Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.822 [INFO][5219] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:35:07.872038 containerd[1972]: 2026-01-23 23:35:07.822 [INFO][5219] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.6/26] IPv6=[] ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" HandleID="k8s-pod-network.e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Workload="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" Jan 23 23:35:07.876041 containerd[1972]: 2026-01-23 23:35:07.828 [INFO][5206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-2xtt8" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0", GenerateName:"calico-apiserver-68b9c97bcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5be57303-da73-45f8-8222-a093d6ce8129", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b9c97bcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"", Pod:"calico-apiserver-68b9c97bcf-2xtt8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65c71e3294b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:07.876041 containerd[1972]: 2026-01-23 23:35:07.830 [INFO][5206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.6/32] ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-2xtt8" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" Jan 23 23:35:07.876041 containerd[1972]: 2026-01-23 23:35:07.830 [INFO][5206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65c71e3294b ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-2xtt8" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" Jan 23 23:35:07.876041 containerd[1972]: 2026-01-23 23:35:07.838 [INFO][5206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-2xtt8" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" Jan 23 23:35:07.876041 containerd[1972]: 2026-01-23 23:35:07.839 [INFO][5206] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-2xtt8" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0", GenerateName:"calico-apiserver-68b9c97bcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5be57303-da73-45f8-8222-a093d6ce8129", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b9c97bcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7", Pod:"calico-apiserver-68b9c97bcf-2xtt8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65c71e3294b", MAC:"ca:28:aa:b6:d8:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:07.876041 containerd[1972]: 2026-01-23 23:35:07.865 [INFO][5206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" Namespace="calico-apiserver" Pod="calico-apiserver-68b9c97bcf-2xtt8" WorkloadEndpoint="ip--172--31--23--100-k8s-calico--apiserver--68b9c97bcf--2xtt8-eth0" Jan 23 23:35:07.929000 audit[5232]: NETFILTER_CFG table=filter:135 family=2 entries=59 op=nft_register_chain pid=5232 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:07.929000 audit[5232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29492 a0=3 a1=ffffc78c2fb0 a2=0 a3=ffffba1c4fa8 items=0 ppid=4633 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:07.929000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:07.933787 containerd[1972]: time="2026-01-23T23:35:07.933709357Z" level=info msg="connecting to shim e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7" address="unix:///run/containerd/s/b35c131a727b2a2af6571f199e28f1952386f96c0ed0f474fc8905012ddc6959" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:35:07.995293 systemd[1]: Started cri-containerd-e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7.scope - libcontainer container e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7. 
Jan 23 23:35:08.019000 audit: BPF prog-id=239 op=LOAD Jan 23 23:35:08.020000 audit: BPF prog-id=240 op=LOAD Jan 23 23:35:08.020000 audit[5254]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5241 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643234336432613633633863613466656637613862313435646430 Jan 23 23:35:08.021000 audit: BPF prog-id=240 op=UNLOAD Jan 23 23:35:08.021000 audit[5254]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5241 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643234336432613633633863613466656637613862313435646430 Jan 23 23:35:08.021000 audit: BPF prog-id=241 op=LOAD Jan 23 23:35:08.021000 audit[5254]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5241 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643234336432613633633863613466656637613862313435646430 Jan 23 23:35:08.021000 audit: BPF prog-id=242 op=LOAD Jan 23 23:35:08.021000 audit[5254]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5241 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643234336432613633633863613466656637613862313435646430 Jan 23 23:35:08.022000 audit: BPF prog-id=242 op=UNLOAD Jan 23 23:35:08.022000 audit[5254]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5241 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643234336432613633633863613466656637613862313435646430 Jan 23 23:35:08.022000 audit: BPF prog-id=241 op=UNLOAD Jan 23 23:35:08.022000 audit[5254]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5241 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643234336432613633633863613466656637613862313435646430 Jan 23 23:35:08.022000 audit: BPF prog-id=243 op=LOAD Jan 23 23:35:08.022000 audit[5254]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5241 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530643234336432613633633863613466656637613862313435646430 Jan 23 23:35:08.046174 systemd-networkd[1781]: califf33d86163e: Gained IPv6LL Jan 23 23:35:08.078888 containerd[1972]: time="2026-01-23T23:35:08.078829546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b9c97bcf-2xtt8,Uid:5be57303-da73-45f8-8222-a093d6ce8129,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e0d243d2a63c8ca4fef7a8b145dd09d7d2bdd1fd2d518955072d9006d7c277e7\"" Jan 23 23:35:08.082336 containerd[1972]: time="2026-01-23T23:35:08.082274518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:35:08.109508 kubelet[3413]: E0123 23:35:08.109230 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" podUID="64fe3344-4e80-444f-ba70-e34e02720a15" Jan 23 23:35:08.117935 kubelet[3413]: E0123 23:35:08.117722 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" podUID="6c551026-ffac-43ea-999f-0823acd8fbb1" Jan 23 23:35:08.118738 kubelet[3413]: E0123 23:35:08.117899 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:35:08.241000 audit[5282]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:08.241000 audit[5282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe20de130 a2=0 a3=1 items=0 ppid=3525 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:08.247000 audit[5282]: NETFILTER_CFG table=nat:137 family=2 entries=35 op=nft_register_chain pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:08.247000 audit[5282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe20de130 a2=0 a3=1 items=0 ppid=3525 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:08.510274 containerd[1972]: time="2026-01-23T23:35:08.510114636Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:08.512351 containerd[1972]: time="2026-01-23T23:35:08.512276040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:35:08.512614 containerd[1972]: time="2026-01-23T23:35:08.512314752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:08.512673 kubelet[3413]: E0123 23:35:08.512589 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:08.512673 kubelet[3413]: E0123 23:35:08.512648 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:08.512925 kubelet[3413]: E0123 23:35:08.512829 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pv8tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b9c97bcf-2xtt8_calico-apiserver(5be57303-da73-45f8-8222-a093d6ce8129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:08.514171 kubelet[3413]: E0123 23:35:08.514089 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" podUID="5be57303-da73-45f8-8222-a093d6ce8129" Jan 23 23:35:08.614411 containerd[1972]: time="2026-01-23T23:35:08.613729693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvgjz,Uid:bc435867-361d-4b3f-a3e1-96c440fc0a66,Namespace:calico-system,Attempt:0,}" Jan 23 23:35:08.813187 systemd-networkd[1781]: cali2e8b0de17fc: Gained IPv6LL Jan 23 23:35:08.845073 systemd-networkd[1781]: calie55e9b51401: Link UP Jan 23 23:35:08.847209 systemd-networkd[1781]: calie55e9b51401: Gained carrier Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.704 [INFO][5284] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0 csi-node-driver- calico-system bc435867-361d-4b3f-a3e1-96c440fc0a66 758 0 2026-01-23 23:34:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-100 csi-node-driver-cvgjz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie55e9b51401 [] [] }} ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Namespace="calico-system" Pod="csi-node-driver-cvgjz" WorkloadEndpoint="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.705 [INFO][5284] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Namespace="calico-system" Pod="csi-node-driver-cvgjz" WorkloadEndpoint="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.758 [INFO][5296] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" HandleID="k8s-pod-network.24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Workload="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.758 [INFO][5296] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" HandleID="k8s-pod-network.24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Workload="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3880), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-100", "pod":"csi-node-driver-cvgjz", "timestamp":"2026-01-23 23:35:08.758135641 +0000 UTC"}, Hostname:"ip-172-31-23-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.758 [INFO][5296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.758 [INFO][5296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.758 [INFO][5296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-100' Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.774 [INFO][5296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" host="ip-172-31-23-100" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.781 [INFO][5296] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-100" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.792 [INFO][5296] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.795 [INFO][5296] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.799 [INFO][5296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.799 [INFO][5296] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" host="ip-172-31-23-100" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.802 [INFO][5296] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524 Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.811 [INFO][5296] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" host="ip-172-31-23-100" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.828 [INFO][5296] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.7/26] block=192.168.22.0/26 handle="k8s-pod-network.24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" host="ip-172-31-23-100" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.829 [INFO][5296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.7/26] handle="k8s-pod-network.24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" host="ip-172-31-23-100" Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.829 [INFO][5296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:35:08.877027 containerd[1972]: 2026-01-23 23:35:08.829 [INFO][5296] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.7/26] IPv6=[] ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" HandleID="k8s-pod-network.24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Workload="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" Jan 23 23:35:08.882212 containerd[1972]: 2026-01-23 23:35:08.834 [INFO][5284] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Namespace="calico-system" Pod="csi-node-driver-cvgjz" WorkloadEndpoint="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bc435867-361d-4b3f-a3e1-96c440fc0a66", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"", Pod:"csi-node-driver-cvgjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie55e9b51401", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:08.882212 containerd[1972]: 2026-01-23 23:35:08.835 [INFO][5284] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.7/32] ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Namespace="calico-system" Pod="csi-node-driver-cvgjz" WorkloadEndpoint="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" Jan 23 23:35:08.882212 containerd[1972]: 2026-01-23 23:35:08.835 [INFO][5284] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie55e9b51401 ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Namespace="calico-system" Pod="csi-node-driver-cvgjz" WorkloadEndpoint="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" Jan 23 23:35:08.882212 containerd[1972]: 2026-01-23 23:35:08.848 [INFO][5284] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Namespace="calico-system" Pod="csi-node-driver-cvgjz" WorkloadEndpoint="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" Jan 23 23:35:08.882212 containerd[1972]: 2026-01-23 23:35:08.849 [INFO][5284] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" 
Namespace="calico-system" Pod="csi-node-driver-cvgjz" WorkloadEndpoint="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bc435867-361d-4b3f-a3e1-96c440fc0a66", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524", Pod:"csi-node-driver-cvgjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie55e9b51401", MAC:"1a:66:45:24:ec:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:08.882212 containerd[1972]: 2026-01-23 23:35:08.868 [INFO][5284] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" Namespace="calico-system" Pod="csi-node-driver-cvgjz" WorkloadEndpoint="ip--172--31--23--100-k8s-csi--node--driver--cvgjz-eth0" Jan 23 23:35:08.935989 containerd[1972]: time="2026-01-23T23:35:08.935723942Z" level=info msg="connecting to shim 24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524" address="unix:///run/containerd/s/c9f6b6ed3bc4c78c5ba1c02ec1654972d53f8605175309a90c9db03d721f572b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:35:08.940000 audit[5322]: NETFILTER_CFG table=filter:138 family=2 entries=52 op=nft_register_chain pid=5322 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:08.940000 audit[5322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffd86c4420 a2=0 a3=ffffa7992fa8 items=0 ppid=4633 pid=5322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:08.940000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:08.989467 systemd[1]: Started cri-containerd-24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524.scope - libcontainer container 24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524. 
Jan 23 23:35:09.010000 audit: BPF prog-id=244 op=LOAD Jan 23 23:35:09.011000 audit: BPF prog-id=245 op=LOAD Jan 23 23:35:09.011000 audit[5335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5321 pid=5335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:09.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383533373739653935613938353166323533616132376535333231 Jan 23 23:35:09.011000 audit: BPF prog-id=245 op=UNLOAD Jan 23 23:35:09.011000 audit[5335]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5321 pid=5335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:09.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383533373739653935613938353166323533616132376535333231 Jan 23 23:35:09.012000 audit: BPF prog-id=246 op=LOAD Jan 23 23:35:09.012000 audit[5335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5321 pid=5335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:09.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383533373739653935613938353166323533616132376535333231 Jan 23 23:35:09.012000 audit: BPF prog-id=247 op=LOAD Jan 23 23:35:09.012000 audit[5335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5321 pid=5335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:09.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383533373739653935613938353166323533616132376535333231 Jan 23 23:35:09.013000 audit: BPF prog-id=247 op=UNLOAD Jan 23 23:35:09.013000 audit[5335]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5321 pid=5335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:09.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383533373739653935613938353166323533616132376535333231 Jan 23 23:35:09.013000 audit: BPF prog-id=246 op=UNLOAD Jan 23 23:35:09.013000 audit[5335]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5321 pid=5335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:09.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383533373739653935613938353166323533616132376535333231 Jan 23 23:35:09.013000 audit: BPF prog-id=248 op=LOAD Jan 23 23:35:09.013000 audit[5335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5321 pid=5335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:09.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383533373739653935613938353166323533616132376535333231 Jan 23 23:35:09.043353 containerd[1972]: time="2026-01-23T23:35:09.043202435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvgjz,Uid:bc435867-361d-4b3f-a3e1-96c440fc0a66,Namespace:calico-system,Attempt:0,} returns sandbox id \"24853779e95a9851f253aa27e53211a33ae2c1134ce491ae3041caaf0837f524\"" Jan 23 23:35:09.046499 containerd[1972]: time="2026-01-23T23:35:09.046455503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:35:09.121616 kubelet[3413]: E0123 23:35:09.121414 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" podUID="5be57303-da73-45f8-8222-a093d6ce8129" Jan 23 23:35:09.125027 kubelet[3413]: E0123 23:35:09.123825 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" podUID="64fe3344-4e80-444f-ba70-e34e02720a15" Jan 23 23:35:09.200000 audit[5363]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:09.200000 audit[5363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffbb82020 a2=0 a3=1 items=0 ppid=3525 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:09.200000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:09.210000 audit[5363]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:09.210000 audit[5363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffbb82020 a2=0 a3=1 items=0 ppid=3525 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:09.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:09.327705 containerd[1972]: time="2026-01-23T23:35:09.327597000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:09.330129 containerd[1972]: time="2026-01-23T23:35:09.330053016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:35:09.330675 containerd[1972]: time="2026-01-23T23:35:09.330428064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:09.331040 kubelet[3413]: E0123 23:35:09.330976 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:35:09.331707 kubelet[3413]: E0123 23:35:09.331048 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:35:09.331707 kubelet[3413]: E0123 23:35:09.331250 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqqlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cvgjz_calico-system(bc435867-361d-4b3f-a3e1-96c440fc0a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:09.334945 containerd[1972]: time="2026-01-23T23:35:09.334868340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:35:09.603367 containerd[1972]: time="2026-01-23T23:35:09.603293102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:09.606713 containerd[1972]: time="2026-01-23T23:35:09.606331190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:35:09.606713 containerd[1972]: time="2026-01-23T23:35:09.606370874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:09.608116 containerd[1972]: time="2026-01-23T23:35:09.608043866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vqjnv,Uid:0699514f-51e2-4aa1-86de-4ee590fe63e1,Namespace:calico-system,Attempt:0,}" Jan 23 23:35:09.609229 kubelet[3413]: E0123 23:35:09.609059 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 
23 23:35:09.609229 kubelet[3413]: E0123 23:35:09.609138 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:35:09.610233 kubelet[3413]: E0123 23:35:09.609379 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqqlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cvgjz_calico-system(bc435867-361d-4b3f-a3e1-96c440fc0a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:09.610948 containerd[1972]: time="2026-01-23T23:35:09.610647470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dstgq,Uid:8786d69c-e728-4edd-874c-d71c00ce627e,Namespace:kube-system,Attempt:0,}" Jan 23 23:35:09.611361 kubelet[3413]: E0123 23:35:09.611279 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:35:09.837536 systemd-networkd[1781]: cali65c71e3294b: Gained IPv6LL Jan 23 23:35:10.019989 systemd-networkd[1781]: cali65f47a48f1f: Link UP Jan 23 23:35:10.022436 systemd-networkd[1781]: cali65f47a48f1f: Gained carrier Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.757 [INFO][5364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0 goldmane-666569f655- calico-system 0699514f-51e2-4aa1-86de-4ee590fe63e1 867 0 2026-01-23 23:34:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-100 goldmane-666569f655-vqjnv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali65f47a48f1f [] [] }} ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Namespace="calico-system" Pod="goldmane-666569f655-vqjnv" WorkloadEndpoint="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.758 [INFO][5364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Namespace="calico-system" Pod="goldmane-666569f655-vqjnv" WorkloadEndpoint="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.903 [INFO][5390] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" HandleID="k8s-pod-network.c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Workload="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.903 [INFO][5390] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" HandleID="k8s-pod-network.c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Workload="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038f220), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-100", "pod":"goldmane-666569f655-vqjnv", "timestamp":"2026-01-23 23:35:09.902989503 +0000 UTC"}, Hostname:"ip-172-31-23-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.903 [INFO][5390] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.903 [INFO][5390] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.903 [INFO][5390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-100' Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.938 [INFO][5390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" host="ip-172-31-23-100" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.951 [INFO][5390] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-100" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.961 [INFO][5390] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.965 [INFO][5390] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.973 [INFO][5390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.973 [INFO][5390] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" host="ip-172-31-23-100" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.978 [INFO][5390] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912 Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:09.990 [INFO][5390] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" host="ip-172-31-23-100" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:10.003 [INFO][5390] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.8/26] block=192.168.22.0/26 handle="k8s-pod-network.c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" host="ip-172-31-23-100" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:10.003 [INFO][5390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.8/26] handle="k8s-pod-network.c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" host="ip-172-31-23-100" Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:10.004 [INFO][5390] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:35:10.060860 containerd[1972]: 2026-01-23 23:35:10.004 [INFO][5390] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.8/26] IPv6=[] ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" HandleID="k8s-pod-network.c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Workload="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" Jan 23 23:35:10.066423 containerd[1972]: 2026-01-23 23:35:10.010 [INFO][5364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Namespace="calico-system" Pod="goldmane-666569f655-vqjnv" WorkloadEndpoint="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0699514f-51e2-4aa1-86de-4ee590fe63e1", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"", Pod:"goldmane-666569f655-vqjnv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali65f47a48f1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:10.066423 containerd[1972]: 2026-01-23 23:35:10.010 [INFO][5364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.8/32] ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Namespace="calico-system" Pod="goldmane-666569f655-vqjnv" WorkloadEndpoint="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" Jan 23 23:35:10.066423 containerd[1972]: 2026-01-23 23:35:10.010 [INFO][5364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65f47a48f1f ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Namespace="calico-system" Pod="goldmane-666569f655-vqjnv" WorkloadEndpoint="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" Jan 23 23:35:10.066423 containerd[1972]: 2026-01-23 23:35:10.015 [INFO][5364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Namespace="calico-system" Pod="goldmane-666569f655-vqjnv" WorkloadEndpoint="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" Jan 23 23:35:10.066423 containerd[1972]: 2026-01-23 23:35:10.016 [INFO][5364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Namespace="calico-system" Pod="goldmane-666569f655-vqjnv" 
WorkloadEndpoint="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0699514f-51e2-4aa1-86de-4ee590fe63e1", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912", Pod:"goldmane-666569f655-vqjnv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali65f47a48f1f", MAC:"b6:dd:49:17:6f:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:10.066423 containerd[1972]: 2026-01-23 23:35:10.049 [INFO][5364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" Namespace="calico-system" Pod="goldmane-666569f655-vqjnv" WorkloadEndpoint="ip--172--31--23--100-k8s-goldmane--666569f655--vqjnv-eth0" Jan 23 23:35:10.134322 kubelet[3413]: E0123 23:35:10.134230 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:35:10.144446 containerd[1972]: time="2026-01-23T23:35:10.144282708Z" level=info msg="connecting to shim c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912" address="unix:///run/containerd/s/b8af6a9f824cfa17daea1a46268fe5796c20f90d0add3cb0fac029111182c2ef" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:35:10.214142 systemd-networkd[1781]: cali84dccfc9666: Link UP Jan 23 23:35:10.218680 systemd-networkd[1781]: cali84dccfc9666: Gained carrier Jan 23 23:35:10.250612 systemd[1]: Started cri-containerd-c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912.scope - libcontainer container 
c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912. Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:09.778 [INFO][5374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0 coredns-674b8bbfcf- kube-system 8786d69c-e728-4edd-874c-d71c00ce627e 863 0 2026-01-23 23:34:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-100 coredns-674b8bbfcf-dstgq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84dccfc9666 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dstgq" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:09.779 [INFO][5374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dstgq" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:09.920 [INFO][5396] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" HandleID="k8s-pod-network.1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Workload="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:09.920 [INFO][5396] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" HandleID="k8s-pod-network.1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Workload="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000338f40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-100", "pod":"coredns-674b8bbfcf-dstgq", "timestamp":"2026-01-23 23:35:09.920214459 +0000 UTC"}, Hostname:"ip-172-31-23-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:09.921 [INFO][5396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.004 [INFO][5396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.005 [INFO][5396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-100' Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.051 [INFO][5396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" host="ip-172-31-23-100" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.069 [INFO][5396] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-100" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.085 [INFO][5396] ipam/ipam.go 511: Trying affinity for 192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.092 [INFO][5396] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.102 [INFO][5396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ip-172-31-23-100" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.102 [INFO][5396] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" host="ip-172-31-23-100" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.116 [INFO][5396] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.128 [INFO][5396] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" host="ip-172-31-23-100" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.185 [INFO][5396] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.9/26] block=192.168.22.0/26 handle="k8s-pod-network.1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" host="ip-172-31-23-100" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.185 [INFO][5396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.9/26] handle="k8s-pod-network.1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" host="ip-172-31-23-100" Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.186 [INFO][5396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:35:10.265370 containerd[1972]: 2026-01-23 23:35:10.186 [INFO][5396] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.9/26] IPv6=[] ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" HandleID="k8s-pod-network.1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Workload="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" Jan 23 23:35:10.269008 containerd[1972]: 2026-01-23 23:35:10.203 [INFO][5374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dstgq" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8786d69c-e728-4edd-874c-d71c00ce627e", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"", Pod:"coredns-674b8bbfcf-dstgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84dccfc9666", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:10.269008 containerd[1972]: 2026-01-23 23:35:10.203 [INFO][5374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.9/32] ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dstgq" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" Jan 23 23:35:10.269008 containerd[1972]: 2026-01-23 23:35:10.203 [INFO][5374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84dccfc9666 ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dstgq" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" Jan 23 23:35:10.269008 containerd[1972]: 2026-01-23 23:35:10.217 [INFO][5374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dstgq" 
WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" Jan 23 23:35:10.269008 containerd[1972]: 2026-01-23 23:35:10.217 [INFO][5374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dstgq" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8786d69c-e728-4edd-874c-d71c00ce627e", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-100", ContainerID:"1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf", Pod:"coredns-674b8bbfcf-dstgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84dccfc9666", MAC:"1a:a9:8c:3d:b3:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:35:10.269008 containerd[1972]: 2026-01-23 23:35:10.255 [INFO][5374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dstgq" WorkloadEndpoint="ip--172--31--23--100-k8s-coredns--674b8bbfcf--dstgq-eth0" Jan 23 23:35:10.308000 audit[5459]: NETFILTER_CFG table=filter:141 family=2 entries=48 op=nft_register_chain pid=5459 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:10.308000 audit[5459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26388 a0=3 a1=ffffd2cd4a00 a2=0 a3=ffffa00abfa8 items=0 ppid=4633 pid=5459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.308000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:10.346000 audit: BPF prog-id=249 op=LOAD Jan 23 23:35:10.349000 audit: BPF prog-id=250 op=LOAD Jan 23 23:35:10.349000 audit[5439]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5427 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337616663313934646261643763663231346137303761613432393531 Jan 23 23:35:10.350000 audit: BPF prog-id=250 op=UNLOAD Jan 23 23:35:10.350000 audit[5439]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5427 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337616663313934646261643763663231346137303761613432393531 Jan 23 23:35:10.372012 containerd[1972]: time="2026-01-23T23:35:10.370347097Z" level=info msg="connecting to shim 1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf" address="unix:///run/containerd/s/903d453cb48051b517c602262c9a2e4f125826b18b4be7bdb23b7ef477127dda" namespace=k8s.io protocol=ttrpc version=3 Jan 23 23:35:10.376000 audit: BPF prog-id=251 op=LOAD Jan 23 23:35:10.376000 audit[5439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5427 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337616663313934646261643763663231346137303761613432393531 Jan 23 23:35:10.380000 audit: BPF prog-id=252 op=LOAD Jan 23 23:35:10.380000 audit[5439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5427 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337616663313934646261643763663231346137303761613432393531 Jan 23 23:35:10.382000 audit: BPF prog-id=252 op=UNLOAD Jan 23 23:35:10.382000 audit[5439]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5427 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.382000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337616663313934646261643763663231346137303761613432393531 Jan 23 23:35:10.383000 audit: BPF prog-id=251 op=UNLOAD Jan 23 23:35:10.383000 audit[5439]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5427 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337616663313934646261643763663231346137303761613432393531 Jan 23 23:35:10.384000 audit: BPF prog-id=253 op=LOAD Jan 23 23:35:10.384000 audit[5439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5427 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337616663313934646261643763663231346137303761613432393531 Jan 23 23:35:10.468331 systemd[1]: Started cri-containerd-1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf.scope - libcontainer container 1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf. 
Jan 23 23:35:10.527000 audit: BPF prog-id=254 op=LOAD Jan 23 23:35:10.528000 audit: BPF prog-id=255 op=LOAD Jan 23 23:35:10.528000 audit[5488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5474 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163353634366133623133646261326265303835343430626536373436 Jan 23 23:35:10.528000 audit: BPF prog-id=255 op=UNLOAD Jan 23 23:35:10.528000 audit[5488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5474 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163353634366133623133646261326265303835343430626536373436 Jan 23 23:35:10.531000 audit: BPF prog-id=256 op=LOAD Jan 23 23:35:10.531000 audit[5488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5474 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163353634366133623133646261326265303835343430626536373436 Jan 23 23:35:10.533000 audit: BPF prog-id=257 op=LOAD Jan 23 23:35:10.533000 audit[5488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5474 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163353634366133623133646261326265303835343430626536373436 Jan 23 23:35:10.534000 audit: BPF prog-id=257 op=UNLOAD Jan 23 23:35:10.534000 audit[5488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5474 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163353634366133623133646261326265303835343430626536373436 Jan 23 23:35:10.534000 audit: BPF prog-id=256 op=UNLOAD Jan 23 23:35:10.534000 audit[5488]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5474 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163353634366133623133646261326265303835343430626536373436 Jan 23 23:35:10.536000 audit: BPF prog-id=258 op=LOAD Jan 23 23:35:10.536000 audit[5488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5474 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163353634366133623133646261326265303835343430626536373436 Jan 23 23:35:10.539000 audit[5504]: NETFILTER_CFG table=filter:142 family=2 entries=52 op=nft_register_chain pid=5504 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 23:35:10.539000 audit[5504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23892 a0=3 a1=ffffd6e48390 a2=0 a3=ffff94f27fa8 items=0 ppid=4633 pid=5504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.539000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 23:35:10.556395 containerd[1972]: time="2026-01-23T23:35:10.554307398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vqjnv,Uid:0699514f-51e2-4aa1-86de-4ee590fe63e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7afc194dbad7cf214a707aa4295180422e84f0b557e18711ad05107c54bf912\"" Jan 23 23:35:10.567705 containerd[1972]: time="2026-01-23T23:35:10.567659834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:35:10.605137 systemd-networkd[1781]: calie55e9b51401: Gained IPv6LL Jan 23 23:35:10.659003 containerd[1972]: time="2026-01-23T23:35:10.658875315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dstgq,Uid:8786d69c-e728-4edd-874c-d71c00ce627e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf\"" Jan 23 23:35:10.669480 containerd[1972]: time="2026-01-23T23:35:10.669400335Z" level=info msg="CreateContainer within sandbox \"1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:35:10.721045 containerd[1972]: time="2026-01-23T23:35:10.718817091Z" level=info msg="Container bff88a9463277a49c465da91964d47e4744ba89c884e325f0366bb8da4d54ea5: CDI devices from CRI Config.CDIDevices: []" Jan 23 23:35:10.735993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423499144.mount: Deactivated successfully. 
Jan 23 23:35:10.741216 containerd[1972]: time="2026-01-23T23:35:10.740685651Z" level=info msg="CreateContainer within sandbox \"1c5646a3b13dba2be085440be674611dbe980b80c736aa0503df272fc3d467bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bff88a9463277a49c465da91964d47e4744ba89c884e325f0366bb8da4d54ea5\"" Jan 23 23:35:10.743579 containerd[1972]: time="2026-01-23T23:35:10.743531811Z" level=info msg="StartContainer for \"bff88a9463277a49c465da91964d47e4744ba89c884e325f0366bb8da4d54ea5\"" Jan 23 23:35:10.747505 containerd[1972]: time="2026-01-23T23:35:10.747273567Z" level=info msg="connecting to shim bff88a9463277a49c465da91964d47e4744ba89c884e325f0366bb8da4d54ea5" address="unix:///run/containerd/s/903d453cb48051b517c602262c9a2e4f125826b18b4be7bdb23b7ef477127dda" protocol=ttrpc version=3 Jan 23 23:35:10.809399 systemd[1]: Started cri-containerd-bff88a9463277a49c465da91964d47e4744ba89c884e325f0366bb8da4d54ea5.scope - libcontainer container bff88a9463277a49c465da91964d47e4744ba89c884e325f0366bb8da4d54ea5. Jan 23 23:35:10.848000 audit: BPF prog-id=259 op=LOAD Jan 23 23:35:10.850000 audit: BPF prog-id=260 op=LOAD Jan 23 23:35:10.850000 audit[5521]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5474 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663838613934363332373761343963343635646139313936346434 Jan 23 23:35:10.851000 audit: BPF prog-id=260 op=UNLOAD Jan 23 23:35:10.851000 audit[5521]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5474 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663838613934363332373761343963343635646139313936346434 Jan 23 23:35:10.851000 audit: BPF prog-id=261 op=LOAD Jan 23 23:35:10.851000 audit[5521]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5474 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663838613934363332373761343963343635646139313936346434 Jan 23 23:35:10.852000 audit: BPF prog-id=262 op=LOAD Jan 23 23:35:10.852000 audit[5521]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5474 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.852000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663838613934363332373761343963343635646139313936346434 Jan 23 23:35:10.852000 audit: BPF prog-id=262 op=UNLOAD Jan 23 23:35:10.852000 audit[5521]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5474 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663838613934363332373761343963343635646139313936346434 Jan 23 23:35:10.852000 audit: BPF prog-id=261 op=UNLOAD Jan 23 23:35:10.852000 audit[5521]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5474 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663838613934363332373761343963343635646139313936346434 Jan 23 23:35:10.852000 audit: BPF prog-id=263 op=LOAD Jan 23 23:35:10.852000 audit[5521]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5474 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:10.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663838613934363332373761343963343635646139313936346434 Jan 23 23:35:10.860180 containerd[1972]: time="2026-01-23T23:35:10.860116888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:10.862955 containerd[1972]: time="2026-01-23T23:35:10.862784020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:10.863235 containerd[1972]: time="2026-01-23T23:35:10.863104132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:35:10.864366 kubelet[3413]: E0123 23:35:10.864220 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:35:10.864943 kubelet[3413]: E0123 23:35:10.864628 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:35:10.865532 kubelet[3413]: E0123 23:35:10.865190 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb9zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vqjnv_calico-system(0699514f-51e2-4aa1-86de-4ee590fe63e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:10.870166 kubelet[3413]: E0123 23:35:10.870016 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqjnv" podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1" Jan 23 23:35:10.951966 containerd[1972]: time="2026-01-23T23:35:10.951855064Z" level=info msg="StartContainer for \"bff88a9463277a49c465da91964d47e4744ba89c884e325f0366bb8da4d54ea5\" returns successfully" Jan 23 23:35:11.145447 kubelet[3413]: E0123 23:35:11.145286 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqjnv" podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1" Jan 23 23:35:11.224624 kubelet[3413]: I0123 23:35:11.224486 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dstgq" podStartSLOduration=55.224464046 podStartE2EDuration="55.224464046s" podCreationTimestamp="2026-01-23 23:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:35:11.184602038 +0000 UTC m=+60.844563820" watchObservedRunningTime="2026-01-23 23:35:11.224464046 +0000 UTC m=+60.884425828" Jan 23 23:35:11.241000 audit[5558]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:11.244381 kernel: kauditd_printk_skb: 258 callbacks suppressed Jan 23 23:35:11.244457 kernel: audit: type=1325 audit(1769211311.241:753): table=filter:143 family=2 entries=14 op=nft_register_rule pid=5558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:11.248619 systemd-networkd[1781]: cali84dccfc9666: Gained IPv6LL Jan 23 23:35:11.241000 audit[5558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdbbf0ac0 a2=0 a3=1 items=0 ppid=3525 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:11.260716 kernel: audit: type=1300 audit(1769211311.241:753): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdbbf0ac0 a2=0 a3=1 items=0 ppid=3525 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:11.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:11.266725 kernel: audit: type=1327 audit(1769211311.241:753): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:11.251000 audit[5558]: NETFILTER_CFG table=nat:144 family=2 entries=44 op=nft_register_rule pid=5558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:11.272952 kernel: audit: type=1325 audit(1769211311.251:754): table=nat:144 family=2 entries=44 op=nft_register_rule pid=5558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:11.273076 kernel: audit: type=1300 audit(1769211311.251:754): arch=c00000b7 syscall=211 
success=yes exit=14196 a0=3 a1=ffffdbbf0ac0 a2=0 a3=1 items=0 ppid=3525 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:11.251000 audit[5558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffdbbf0ac0 a2=0 a3=1 items=0 ppid=3525 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:11.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:11.288124 kernel: audit: type=1327 audit(1769211311.251:754): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:11.343000 audit[5560]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:11.343000 audit[5560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff987a530 a2=0 a3=1 items=0 ppid=3525 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:11.356645 kernel: audit: type=1325 audit(1769211311.343:755): table=filter:145 family=2 entries=14 op=nft_register_rule pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:11.356787 kernel: audit: type=1300 audit(1769211311.343:755): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff987a530 a2=0 a3=1 items=0 ppid=3525 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:11.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:11.362296 kernel: audit: type=1327 audit(1769211311.343:755): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:11.391000 audit[5560]: NETFILTER_CFG table=nat:146 family=2 entries=56 op=nft_register_chain pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:11.406999 kernel: audit: type=1325 audit(1769211311.391:756): table=nat:146 family=2 entries=56 op=nft_register_chain pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:11.391000 audit[5560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff987a530 a2=0 a3=1 items=0 ppid=3525 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:11.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:11.821294 systemd-networkd[1781]: cali65f47a48f1f: Gained IPv6LL Jan 23 23:35:12.150008 kubelet[3413]: E0123 23:35:12.149243 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqjnv" podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1" Jan 23 23:35:13.892223 ntpd[1928]: Listen normally on 6 vxlan.calico 192.168.22.0:123 Jan 23 23:35:13.892956 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 6 vxlan.calico 192.168.22.0:123 Jan 23 23:35:13.892956 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 7 caliaa217f3c85e [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 23:35:13.892304 ntpd[1928]: Listen normally on 7 caliaa217f3c85e [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 23:35:13.893166 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 8 vxlan.calico [fe80::6470:d8ff:fe43:d56f%5]:123 Jan 23 23:35:13.893166 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 9 calibbb2ba1024b [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 23:35:13.893166 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 10 cali65be505e4e4 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 23:35:13.892983 ntpd[1928]: Listen normally on 8 vxlan.calico [fe80::6470:d8ff:fe43:d56f%5]:123 Jan 23 23:35:13.894084 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 11 califf33d86163e [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 23:35:13.894084 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 12 cali2e8b0de17fc [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 23:35:13.894084 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 13 cali65c71e3294b [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 23:35:13.894084 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 14 calie55e9b51401 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 23:35:13.894084 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 15 cali65f47a48f1f [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 23:35:13.894084 ntpd[1928]: 23 Jan 23:35:13 ntpd[1928]: Listen normally on 16 cali84dccfc9666 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 23 23:35:13.893050 ntpd[1928]: Listen normally on 9 calibbb2ba1024b [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 23:35:13.893121 ntpd[1928]: Listen normally on 10 cali65be505e4e4 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 23:35:13.893169 ntpd[1928]: Listen normally on 11 califf33d86163e [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 23:35:13.893215 ntpd[1928]: Listen normally on 12 cali2e8b0de17fc [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 23:35:13.893602 ntpd[1928]: Listen normally on 13 cali65c71e3294b [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 23:35:13.893662 ntpd[1928]: Listen normally on 14 calie55e9b51401 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 23:35:13.893707 ntpd[1928]: Listen normally on 15 cali65f47a48f1f [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 23:35:13.893756 ntpd[1928]: Listen normally on 16 cali84dccfc9666 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 23 23:35:15.608225 containerd[1972]: time="2026-01-23T23:35:15.608081983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:35:15.877067 containerd[1972]: time="2026-01-23T23:35:15.876206385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:15.878502 containerd[1972]: time="2026-01-23T23:35:15.878431773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:35:15.878618 containerd[1972]: time="2026-01-23T23:35:15.878561109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:15.878866 kubelet[3413]: E0123 23:35:15.878822 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:35:15.879830 kubelet[3413]: E0123 23:35:15.879434 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:35:15.879830 kubelet[3413]: E0123 23:35:15.879757 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b96f1a59a5d8446cad0dc01e07796dc4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krc7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f9b44c667-h6s7s_calico-system(24e24657-8e54-4ae7-acdb-2eda45aabbdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:15.884218 containerd[1972]: time="2026-01-23T23:35:15.884156241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:35:16.144825 containerd[1972]: time="2026-01-23T23:35:16.144520878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:16.147718 containerd[1972]: time="2026-01-23T23:35:16.147578130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:35:16.147718 containerd[1972]: time="2026-01-23T23:35:16.147648822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:16.148109 kubelet[3413]: E0123 23:35:16.148050 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:35:16.148229 kubelet[3413]: E0123 23:35:16.148119 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:35:16.149709 kubelet[3413]: E0123 23:35:16.149562 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krc7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f9b44c667-h6s7s_calico-system(24e24657-8e54-4ae7-acdb-2eda45aabbdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:16.152328 kubelet[3413]: E0123 23:35:16.150993 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f9b44c667-h6s7s" podUID="24e24657-8e54-4ae7-acdb-2eda45aabbdf" Jan 23 23:35:20.610251 containerd[1972]: time="2026-01-23T23:35:20.610171008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:35:20.847793 containerd[1972]: time="2026-01-23T23:35:20.847580642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:20.849813 containerd[1972]: time="2026-01-23T23:35:20.849769250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:35:20.850117 containerd[1972]: time="2026-01-23T23:35:20.849855350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:20.850446 kubelet[3413]: E0123 23:35:20.850400 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:20.851259 kubelet[3413]: E0123 23:35:20.850998 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:20.851889 kubelet[3413]: E0123 23:35:20.851681 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pv8tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b9c97bcf-2xtt8_calico-apiserver(5be57303-da73-45f8-8222-a093d6ce8129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:20.853583 containerd[1972]: time="2026-01-23T23:35:20.851997410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:35:20.853825 kubelet[3413]: E0123 23:35:20.853629 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" podUID="5be57303-da73-45f8-8222-a093d6ce8129" Jan 23 23:35:21.119430 containerd[1972]: time="2026-01-23T23:35:21.119367875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:21.121629 containerd[1972]: time="2026-01-23T23:35:21.121556459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:35:21.121629 containerd[1972]: time="2026-01-23T23:35:21.121586315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:21.122940 kubelet[3413]: E0123 23:35:21.122654 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:21.122940 kubelet[3413]: E0123 23:35:21.122746 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:21.123144 kubelet[3413]: E0123 23:35:21.123021 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wlxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b9c97bcf-96pwk_calico-apiserver(1b0b2743-62ac-460d-ba7c-52d229e3b875): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:21.125139 kubelet[3413]: E0123 23:35:21.125060 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:35:21.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.23.100:22-20.161.92.111:38524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:21.767865 systemd[1]: Started sshd@7-172.31.23.100:22-20.161.92.111:38524.service - OpenSSH per-connection server daemon (20.161.92.111:38524). Jan 23 23:35:21.769367 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 23 23:35:21.769414 kernel: audit: type=1130 audit(1769211321.767:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.23.100:22-20.161.92.111:38524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:35:22.236000 audit[5581]: USER_ACCT pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.237611 sshd[5581]: Accepted publickey for core from 20.161.92.111 port 38524 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:22.244959 kernel: audit: type=1101 audit(1769211322.236:758): pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.244000 audit[5581]: CRED_ACQ pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.246637 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:22.254669 kernel: audit: type=1103 audit(1769211322.244:759): pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.254776 kernel: audit: type=1006 audit(1769211322.244:760): pid=5581 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 23 23:35:22.255081 kernel: audit: type=1300 audit(1769211322.244:760): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffead1ab50 a2=3 a3=0 items=0 ppid=1 pid=5581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:22.244000 audit[5581]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffead1ab50 a2=3 a3=0 items=0 ppid=1 pid=5581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:22.244000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:22.264020 kernel: audit: type=1327 audit(1769211322.244:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:22.267523 systemd-logind[1934]: New session 9 of user core. Jan 23 23:35:22.273520 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 23:35:22.281000 audit[5581]: USER_START pid=5581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.289000 audit[5585]: CRED_ACQ pid=5585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.296282 kernel: audit: type=1105 audit(1769211322.281:761): pid=5581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.296351 kernel: audit: type=1103 audit(1769211322.289:762): pid=5585 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.609177 containerd[1972]: time="2026-01-23T23:35:22.608825258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:35:22.619553 sshd[5585]: Connection closed by 20.161.92.111 port 38524 Jan 23 23:35:22.620551 sshd-session[5581]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:22.624000 audit[5581]: USER_END pid=5581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.624000 audit[5581]: CRED_DISP pid=5581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.641175 kernel: audit: type=1106 audit(1769211322.624:763): pid=5581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.641236 kernel: audit: type=1104 audit(1769211322.624:764): pid=5581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:22.633675 systemd[1]: sshd@7-172.31.23.100:22-20.161.92.111:38524.service: Deactivated successfully. Jan 23 23:35:22.638937 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:35:22.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.23.100:22-20.161.92.111:38524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:35:22.648683 systemd-logind[1934]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:35:22.653108 systemd-logind[1934]: Removed session 9. Jan 23 23:35:22.917450 containerd[1972]: time="2026-01-23T23:35:22.917305720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:22.920426 containerd[1972]: time="2026-01-23T23:35:22.920232208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:35:22.920426 containerd[1972]: time="2026-01-23T23:35:22.920346616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:22.920620 kubelet[3413]: E0123 23:35:22.920575 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:22.921428 kubelet[3413]: E0123 23:35:22.920634 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:22.921428 kubelet[3413]: E0123 23:35:22.920822 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82mcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-778cf5d48d-s7h4h_calico-apiserver(64fe3344-4e80-444f-ba70-e34e02720a15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:22.922497 kubelet[3413]: E0123 23:35:22.922407 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" podUID="64fe3344-4e80-444f-ba70-e34e02720a15" Jan 23 23:35:23.608785 containerd[1972]: time="2026-01-23T23:35:23.607361367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:35:23.888724 containerd[1972]: time="2026-01-23T23:35:23.888434633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:23.890729 containerd[1972]: time="2026-01-23T23:35:23.890580449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:35:23.890729 containerd[1972]: time="2026-01-23T23:35:23.890595941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:23.891163 kubelet[3413]: E0123 23:35:23.891119 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:35:23.892667 kubelet[3413]: E0123 23:35:23.891480 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:35:23.892667 kubelet[3413]: E0123 23:35:23.891843 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb9zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vqjnv_calico-system(0699514f-51e2-4aa1-86de-4ee590fe63e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:23.893007 containerd[1972]: time="2026-01-23T23:35:23.892069565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:35:23.893722 kubelet[3413]: E0123 23:35:23.893577 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqjnv" 
podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1" Jan 23 23:35:24.134861 containerd[1972]: time="2026-01-23T23:35:24.134798594Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:24.137178 containerd[1972]: time="2026-01-23T23:35:24.137086022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:35:24.137789 containerd[1972]: time="2026-01-23T23:35:24.137145410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:24.137897 kubelet[3413]: E0123 23:35:24.137543 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:35:24.137897 kubelet[3413]: E0123 23:35:24.137601 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:35:24.138485 kubelet[3413]: E0123 23:35:24.137974 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q2h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57b65b9-26wjv_calico-system(6c551026-ffac-43ea-999f-0823acd8fbb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:24.139709 containerd[1972]: time="2026-01-23T23:35:24.138984014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:35:24.139828 kubelet[3413]: E0123 23:35:24.139383 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" podUID="6c551026-ffac-43ea-999f-0823acd8fbb1" Jan 23 23:35:24.401967 containerd[1972]: time="2026-01-23T23:35:24.401788131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:24.404058 containerd[1972]: time="2026-01-23T23:35:24.403974675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:35:24.404156 containerd[1972]: time="2026-01-23T23:35:24.404097195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:24.404361 kubelet[3413]: E0123 23:35:24.404307 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:35:24.404437 kubelet[3413]: E0123 23:35:24.404373 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:35:24.405269 kubelet[3413]: E0123 23:35:24.405026 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqqlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cvgjz_calico-system(bc435867-361d-4b3f-a3e1-96c440fc0a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:24.409940 containerd[1972]: time="2026-01-23T23:35:24.409755579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:35:24.808242 containerd[1972]: time="2026-01-23T23:35:24.808033229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:24.810259 containerd[1972]: time="2026-01-23T23:35:24.810212357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:35:24.810658 containerd[1972]: time="2026-01-23T23:35:24.810281321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:24.810783 kubelet[3413]: E0123 23:35:24.810699 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:35:24.810856 kubelet[3413]: E0123 23:35:24.810760 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:35:24.811045 kubelet[3413]: E0123 23:35:24.810967 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqqlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cvgjz_calico-system(bc435867-361d-4b3f-a3e1-96c440fc0a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:24.813058 kubelet[3413]: E0123 23:35:24.812756 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:35:27.712701 systemd[1]: Started sshd@8-172.31.23.100:22-20.161.92.111:51766.service - OpenSSH per-connection server daemon (20.161.92.111:51766). 
Jan 23 23:35:27.720376 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 23:35:27.720430 kernel: audit: type=1130 audit(1769211327.712:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.23.100:22-20.161.92.111:51766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:27.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.23.100:22-20.161.92.111:51766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:28.165000 audit[5605]: USER_ACCT pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.168197 sshd[5605]: Accepted publickey for core from 20.161.92.111 port 51766 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:28.174955 kernel: audit: type=1101 audit(1769211328.165:767): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.175069 kernel: audit: type=1103 audit(1769211328.173:768): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.173000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.176508 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:28.184621 kernel: audit: type=1006 audit(1769211328.173:769): pid=5605 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 23 23:35:28.173000 audit[5605]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd463a450 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:28.191123 kernel: audit: type=1300 audit(1769211328.173:769): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd463a450 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:28.194848 kernel: audit: type=1327 audit(1769211328.173:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:28.173000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:28.200001 systemd-logind[1934]: New session 10 of user core. Jan 23 23:35:28.206225 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 23:35:28.211000 audit[5605]: USER_START pid=5605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.219980 kernel: audit: type=1105 audit(1769211328.211:770): pid=5605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.219000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.226989 kernel: audit: type=1103 audit(1769211328.219:771): pid=5609 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.543363 sshd[5609]: Connection closed by 20.161.92.111 port 51766 Jan 23 23:35:28.544204 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:28.547000 audit[5605]: USER_END pid=5605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.547000 audit[5605]: CRED_DISP pid=5605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.561593 kernel: audit: type=1106 audit(1769211328.547:772): pid=5605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.561688 kernel: audit: type=1104 audit(1769211328.547:773): pid=5605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:28.561941 systemd[1]: sshd@8-172.31.23.100:22-20.161.92.111:51766.service: Deactivated successfully. Jan 23 23:35:28.565381 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:35:28.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.23.100:22-20.161.92.111:51766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:28.570679 systemd-logind[1934]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:35:28.572780 systemd-logind[1934]: Removed session 10. 
Jan 23 23:35:28.614260 kubelet[3413]: E0123 23:35:28.613686 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f9b44c667-h6s7s" podUID="24e24657-8e54-4ae7-acdb-2eda45aabbdf" Jan 23 23:35:32.610266 kubelet[3413]: E0123 23:35:32.610208 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:35:33.648334 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 23:35:33.648475 kernel: audit: type=1130 audit(1769211333.638:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.23.100:22-20.161.92.111:58772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:33.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.23.100:22-20.161.92.111:58772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:33.639567 systemd[1]: Started sshd@9-172.31.23.100:22-20.161.92.111:58772.service - OpenSSH per-connection server daemon (20.161.92.111:58772). 
Jan 23 23:35:34.102000 audit[5650]: USER_ACCT pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.109851 sshd[5650]: Accepted publickey for core from 20.161.92.111 port 58772 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:34.110995 kernel: audit: type=1101 audit(1769211334.102:776): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.110000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.113204 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:34.121395 kernel: audit: type=1103 audit(1769211334.110:777): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.121523 kernel: audit: type=1006 audit(1769211334.110:778): pid=5650 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 23 23:35:34.110000 audit[5650]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda194c50 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:34.128514 kernel: audit: type=1300 audit(1769211334.110:778): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda194c50 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:34.110000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:34.131315 kernel: audit: type=1327 audit(1769211334.110:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:34.138172 systemd-logind[1934]: New session 11 of user core. Jan 23 23:35:34.146210 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 23:35:34.181000 audit[5650]: USER_START pid=5650 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.181000 audit[5654]: CRED_ACQ pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.195201 kernel: audit: type=1105 audit(1769211334.181:779): pid=5650 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.195299 kernel: audit: type=1103 audit(1769211334.181:780): pid=5654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.480969 sshd[5654]: Connection closed by 20.161.92.111 port 58772 Jan 23 23:35:34.481895 sshd-session[5650]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:34.484000 audit[5650]: USER_END pid=5650 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.493474 systemd[1]: sshd@9-172.31.23.100:22-20.161.92.111:58772.service: Deactivated successfully. Jan 23 23:35:34.494942 kernel: audit: type=1106 audit(1769211334.484:781): pid=5650 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.484000 audit[5650]: CRED_DISP pid=5650 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.500184 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:35:34.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.23.100:22-20.161.92.111:58772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:34.503623 kernel: audit: type=1104 audit(1769211334.484:782): pid=5650 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:34.505409 systemd-logind[1934]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:35:34.507216 systemd-logind[1934]: Removed session 11. 
Jan 23 23:35:34.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.23.100:22-20.161.92.111:58782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:34.572270 systemd[1]: Started sshd@10-172.31.23.100:22-20.161.92.111:58782.service - OpenSSH per-connection server daemon (20.161.92.111:58782). Jan 23 23:35:34.617023 kubelet[3413]: E0123 23:35:34.616726 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" podUID="5be57303-da73-45f8-8222-a093d6ce8129" Jan 23 23:35:35.063000 audit[5667]: USER_ACCT pid=5667 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:35.064510 sshd[5667]: Accepted publickey for core from 20.161.92.111 port 58782 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:35.066000 audit[5667]: CRED_ACQ pid=5667 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:35.067000 audit[5667]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6f54e00 a2=3 a3=0 items=0 ppid=1 pid=5667 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:35.067000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:35.070219 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:35.083006 systemd-logind[1934]: New session 12 of user core. Jan 23 23:35:35.094346 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 23:35:35.100000 audit[5667]: USER_START pid=5667 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:35.103000 audit[5671]: CRED_ACQ pid=5671 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:35.512387 sshd[5671]: Connection closed by 20.161.92.111 port 58782 Jan 23 23:35:35.515245 sshd-session[5667]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:35.518000 audit[5667]: USER_END pid=5667 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:35.518000 audit[5667]: CRED_DISP pid=5667 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:35.526010 systemd-logind[1934]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:35:35.527571 systemd[1]: sshd@10-172.31.23.100:22-20.161.92.111:58782.service: Deactivated successfully. Jan 23 23:35:35.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.23.100:22-20.161.92.111:58782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:35.536831 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:35:35.545024 systemd-logind[1934]: Removed session 12. Jan 23 23:35:35.611692 systemd[1]: Started sshd@11-172.31.23.100:22-20.161.92.111:58790.service - OpenSSH per-connection server daemon (20.161.92.111:58790). Jan 23 23:35:35.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.23.100:22-20.161.92.111:58790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:35:36.085000 audit[5680]: USER_ACCT pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:36.087017 sshd[5680]: Accepted publickey for core from 20.161.92.111 port 58790 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:36.087000 audit[5680]: CRED_ACQ pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:36.087000 audit[5680]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffec6b5220 a2=3 a3=0 items=0 ppid=1 pid=5680 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:36.087000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:36.090772 sshd-session[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:36.100019 systemd-logind[1934]: New session 13 of user core. Jan 23 23:35:36.109199 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:35:36.115000 audit[5680]: USER_START pid=5680 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:36.118000 audit[5684]: CRED_ACQ pid=5684 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:36.452519 sshd[5684]: Connection closed by 20.161.92.111 port 58790 Jan 23 23:35:36.453380 sshd-session[5680]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:36.455000 audit[5680]: USER_END pid=5680 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:36.455000 audit[5680]: CRED_DISP pid=5680 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:36.461893 systemd[1]: sshd@11-172.31.23.100:22-20.161.92.111:58790.service: Deactivated successfully. Jan 23 23:35:36.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.23.100:22-20.161.92.111:58790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:36.467656 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:35:36.471276 systemd-logind[1934]: Session 13 logged out. Waiting for processes to exit. 
Jan 23 23:35:36.474435 systemd-logind[1934]: Removed session 13. Jan 23 23:35:36.613114 kubelet[3413]: E0123 23:35:36.612961 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqjnv" podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1" Jan 23 23:35:36.615851 kubelet[3413]: E0123 23:35:36.615719 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:35:37.607707 kubelet[3413]: E0123 23:35:37.607605 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" podUID="64fe3344-4e80-444f-ba70-e34e02720a15" Jan 23 23:35:37.609710 kubelet[3413]: E0123 23:35:37.609331 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" podUID="6c551026-ffac-43ea-999f-0823acd8fbb1" Jan 23 23:35:41.563407 systemd[1]: Started sshd@12-172.31.23.100:22-20.161.92.111:58796.service - OpenSSH per-connection server daemon (20.161.92.111:58796). Jan 23 23:35:41.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.23.100:22-20.161.92.111:58796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:41.565265 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 23:35:41.565681 kernel: audit: type=1130 audit(1769211341.562:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.23.100:22-20.161.92.111:58796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 23 23:35:42.079000 audit[5698]: USER_ACCT pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.088699 sshd[5698]: Accepted publickey for core from 20.161.92.111 port 58796 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:42.091541 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:42.087000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.099417 kernel: audit: type=1101 audit(1769211342.079:803): pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.099471 kernel: audit: type=1103 audit(1769211342.087:804): pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.107729 kernel: audit: type=1006 audit(1769211342.087:805): pid=5698 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 23 23:35:42.087000 audit[5698]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd2dcf60 a2=3 a3=0 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:42.118121 kernel: audit: type=1300 audit(1769211342.087:805): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd2dcf60 a2=3 a3=0 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:42.087000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:42.121290 kernel: audit: type=1327 audit(1769211342.087:805): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:42.123967 systemd-logind[1934]: New session 14 of user core. Jan 23 23:35:42.128278 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 23:35:42.139000 audit[5698]: USER_START pid=5698 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.148000 audit[5705]: CRED_ACQ pid=5705 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.155184 kernel: audit: type=1105 audit(1769211342.139:806): pid=5698 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.155343 kernel: audit: type=1103 audit(1769211342.148:807): pid=5705 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.536167 sshd[5705]: Connection closed by 20.161.92.111 port 58796 Jan 23 23:35:42.537204 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:42.539000 audit[5698]: USER_END pid=5698 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.546260 systemd[1]: sshd@12-172.31.23.100:22-20.161.92.111:58796.service: Deactivated successfully. Jan 23 23:35:42.539000 audit[5698]: CRED_DISP pid=5698 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.552201 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:35:42.553446 kernel: audit: type=1106 audit(1769211342.539:808): pid=5698 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.556096 kernel: audit: type=1104 audit(1769211342.539:809): pid=5698 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:42.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.23.100:22-20.161.92.111:58796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:42.556356 systemd-logind[1934]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:35:42.560344 systemd-logind[1934]: Removed session 14. 
Jan 23 23:35:43.607561 containerd[1972]: time="2026-01-23T23:35:43.607450175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:35:43.897873 containerd[1972]: time="2026-01-23T23:35:43.897503688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:43.899942 containerd[1972]: time="2026-01-23T23:35:43.899843016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:35:43.900269 containerd[1972]: time="2026-01-23T23:35:43.900070116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:43.900487 kubelet[3413]: E0123 23:35:43.900441 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:35:43.901903 kubelet[3413]: E0123 23:35:43.900861 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:35:43.901903 kubelet[3413]: E0123 23:35:43.901663 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b96f1a59a5d8446cad0dc01e07796dc4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krc7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f9b44c667-h6s7s_calico-system(24e24657-8e54-4ae7-acdb-2eda45aabbdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:43.905479 containerd[1972]: time="2026-01-23T23:35:43.905419452Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:35:44.185063 containerd[1972]: time="2026-01-23T23:35:44.184995669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:44.187328 containerd[1972]: time="2026-01-23T23:35:44.187260741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:35:44.187556 containerd[1972]: time="2026-01-23T23:35:44.187297485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:44.187664 kubelet[3413]: E0123 23:35:44.187593 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:35:44.188565 kubelet[3413]: E0123 23:35:44.187660 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:35:44.188565 kubelet[3413]: E0123 23:35:44.187852 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krc7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f9b44c667-h6s7s_calico-system(24e24657-8e54-4ae7-acdb-2eda45aabbdf): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:44.189194 kubelet[3413]: E0123 23:35:44.189128 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f9b44c667-h6s7s" podUID="24e24657-8e54-4ae7-acdb-2eda45aabbdf" Jan 23 23:35:44.613985 containerd[1972]: time="2026-01-23T23:35:44.612703464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:35:44.909607 containerd[1972]: time="2026-01-23T23:35:44.908715709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:44.912193 containerd[1972]: time="2026-01-23T23:35:44.912041545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:35:44.912193 containerd[1972]: time="2026-01-23T23:35:44.912117661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:44.914795 kubelet[3413]: E0123 23:35:44.913133 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:44.914795 kubelet[3413]: E0123 23:35:44.913207 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:44.914795 kubelet[3413]: E0123 23:35:44.913391 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wlxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b9c97bcf-96pwk_calico-apiserver(1b0b2743-62ac-460d-ba7c-52d229e3b875): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:44.915595 kubelet[3413]: E0123 23:35:44.915126 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:35:46.619310 containerd[1972]: time="2026-01-23T23:35:46.619245458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:35:46.907613 containerd[1972]: time="2026-01-23T23:35:46.907282347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:46.910186 containerd[1972]: time="2026-01-23T23:35:46.910037955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:35:46.910186 containerd[1972]: time="2026-01-23T23:35:46.910102239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:46.910503 
kubelet[3413]: E0123 23:35:46.910441 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:46.911884 kubelet[3413]: E0123 23:35:46.910527 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:46.911884 kubelet[3413]: E0123 23:35:46.910811 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pv8tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b9c97bcf-2xtt8_calico-apiserver(5be57303-da73-45f8-8222-a093d6ce8129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:46.912776 kubelet[3413]: E0123 23:35:46.912090 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" podUID="5be57303-da73-45f8-8222-a093d6ce8129" Jan 23 23:35:47.626449 systemd[1]: Started sshd@13-172.31.23.100:22-20.161.92.111:34654.service - OpenSSH per-connection server daemon (20.161.92.111:34654). Jan 23 23:35:47.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.23.100:22-20.161.92.111:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:47.629137 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 23:35:47.629208 kernel: audit: type=1130 audit(1769211347.625:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.23.100:22-20.161.92.111:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:48.115000 audit[5727]: USER_ACCT pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.124873 sshd[5727]: Accepted publickey for core from 20.161.92.111 port 34654 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:48.125581 kernel: audit: type=1101 audit(1769211348.115:812): pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.126000 audit[5727]: CRED_ACQ pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.129792 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:48.138288 kernel: audit: type=1103 audit(1769211348.126:813): pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.138405 kernel: audit: type=1006 audit(1769211348.126:814): pid=5727 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 23 23:35:48.126000 audit[5727]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc1a34c00 a2=3 a3=0 items=0 ppid=1 pid=5727 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:48.126000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:48.151374 kernel: audit: type=1300 audit(1769211348.126:814): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc1a34c00 a2=3 a3=0 items=0 ppid=1 pid=5727 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:48.151851 kernel: audit: type=1327 audit(1769211348.126:814): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:48.156801 systemd-logind[1934]: New session 15 of user core. Jan 23 23:35:48.165267 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:35:48.174000 audit[5727]: USER_START pid=5727 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.182000 audit[5731]: CRED_ACQ pid=5731 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.184063 kernel: audit: type=1105 audit(1769211348.174:815): pid=5727 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.190985 kernel: audit: type=1103 audit(1769211348.182:816): pid=5731 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.534138 sshd[5731]: Connection closed by 20.161.92.111 port 34654 Jan 23 23:35:48.536232 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:48.539000 audit[5727]: USER_END pid=5727 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.549667 systemd[1]: sshd@13-172.31.23.100:22-20.161.92.111:34654.service: Deactivated successfully. Jan 23 23:35:48.555887 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:35:48.539000 audit[5727]: CRED_DISP pid=5727 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.562770 kernel: audit: type=1106 audit(1769211348.539:817): pid=5727 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.562888 kernel: audit: type=1104 audit(1769211348.539:818): pid=5727 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:48.564169 systemd-logind[1934]: Session 15 logged out. Waiting for processes to exit. 
Jan 23 23:35:48.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.23.100:22-20.161.92.111:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:48.572307 systemd-logind[1934]: Removed session 15. Jan 23 23:35:49.613310 containerd[1972]: time="2026-01-23T23:35:49.612365344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:35:49.910517 containerd[1972]: time="2026-01-23T23:35:49.910197066Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:49.912496 containerd[1972]: time="2026-01-23T23:35:49.912397806Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:35:49.912664 containerd[1972]: time="2026-01-23T23:35:49.912417798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:49.914972 kubelet[3413]: E0123 23:35:49.913125 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:35:49.914972 kubelet[3413]: E0123 23:35:49.913191 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:35:49.914972 kubelet[3413]: E0123 23:35:49.913355 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqqlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cvgjz_calico-system(bc435867-361d-4b3f-a3e1-96c440fc0a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:49.920333 containerd[1972]: time="2026-01-23T23:35:49.920211762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:35:50.194415 containerd[1972]: time="2026-01-23T23:35:50.194340543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:50.197157 containerd[1972]: time="2026-01-23T23:35:50.196739859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:50.197157 containerd[1972]: time="2026-01-23T23:35:50.197055147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:35:50.197606 kubelet[3413]: E0123 23:35:50.197543 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:35:50.197701 kubelet[3413]: E0123 23:35:50.197613 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:35:50.197895 kubelet[3413]: E0123 23:35:50.197786 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqqlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cvgjz_calico-system(bc435867-361d-4b3f-a3e1-96c440fc0a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:50.199202 kubelet[3413]: E0123 23:35:50.199121 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:35:50.613063 containerd[1972]: time="2026-01-23T23:35:50.611647817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:35:50.869442 containerd[1972]: time="2026-01-23T23:35:50.868860163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
23:35:50.871386 containerd[1972]: time="2026-01-23T23:35:50.871214107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:35:50.871732 kubelet[3413]: E0123 23:35:50.871635 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:35:50.871853 containerd[1972]: time="2026-01-23T23:35:50.871261651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:50.872179 kubelet[3413]: E0123 23:35:50.871999 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:35:50.872952 kubelet[3413]: E0123 23:35:50.872556 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb9zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vqjnv_calico-system(0699514f-51e2-4aa1-86de-4ee590fe63e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:50.873729 containerd[1972]: time="2026-01-23T23:35:50.873567451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:35:50.875704 kubelet[3413]: E0123 23:35:50.873977 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqjnv" podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1" Jan 23 23:35:51.141290 containerd[1972]: time="2026-01-23T23:35:51.140798152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:51.144563 containerd[1972]: time="2026-01-23T23:35:51.144203620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:35:51.144563 containerd[1972]: time="2026-01-23T23:35:51.144279556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 23:35:51.144795 kubelet[3413]: E0123 23:35:51.144583 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:51.144795 kubelet[3413]: E0123 23:35:51.144645 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:35:51.145370 kubelet[3413]: E0123 23:35:51.144999 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82mcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-778cf5d48d-s7h4h_calico-apiserver(64fe3344-4e80-444f-ba70-e34e02720a15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:51.146831 kubelet[3413]: E0123 23:35:51.146182 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" podUID="64fe3344-4e80-444f-ba70-e34e02720a15" Jan 23 23:35:51.146962 containerd[1972]: time="2026-01-23T23:35:51.145803928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:35:51.432246 containerd[1972]: time="2026-01-23T23:35:51.432162029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 23:35:51.434621 containerd[1972]: time="2026-01-23T23:35:51.434504945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:35:51.434801 containerd[1972]: time="2026-01-23T23:35:51.434572325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
Jan 23 23:35:51.435848 kubelet[3413]: E0123 23:35:51.435061 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:35:51.435848 kubelet[3413]: E0123 23:35:51.435149 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:35:51.435848 kubelet[3413]: E0123 23:35:51.435402 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q2h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57b65b9-26wjv_calico-system(6c551026-ffac-43ea-999f-0823acd8fbb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:35:51.436760 kubelet[3413]: E0123 23:35:51.436677 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" podUID="6c551026-ffac-43ea-999f-0823acd8fbb1" Jan 23 23:35:53.639941 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 23:35:53.640149 kernel: audit: type=1130 audit(1769211353.637:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.23.100:22-20.161.92.111:58430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:53.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.23.100:22-20.161.92.111:58430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:53.637384 systemd[1]: Started sshd@14-172.31.23.100:22-20.161.92.111:58430.service - OpenSSH per-connection server daemon (20.161.92.111:58430). Jan 23 23:35:54.150000 audit[5745]: USER_ACCT pid=5745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.158068 sshd[5745]: Accepted publickey for core from 20.161.92.111 port 58430 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:54.159397 kernel: audit: type=1101 audit(1769211354.150:821): pid=5745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.159000 audit[5745]: CRED_ACQ pid=5745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.168700 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:54.170950 kernel: audit: type=1103 audit(1769211354.159:822): pid=5745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.171068 kernel: audit: type=1006 audit(1769211354.166:823): pid=5745 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 23 23:35:54.166000 audit[5745]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffe4d600 a2=3 a3=0 items=0 ppid=1 pid=5745 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
23:35:54.182578 kernel: audit: type=1300 audit(1769211354.166:823): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffe4d600 a2=3 a3=0 items=0 ppid=1 pid=5745 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:54.166000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:54.185879 kernel: audit: type=1327 audit(1769211354.166:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:54.194984 systemd-logind[1934]: New session 16 of user core. Jan 23 23:35:54.205678 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:35:54.214000 audit[5745]: USER_START pid=5745 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.225724 kernel: audit: type=1105 audit(1769211354.214:824): pid=5745 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.225844 kernel: audit: type=1103 audit(1769211354.224:825): pid=5749 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.224000 audit[5749]: CRED_ACQ pid=5749 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.615717 sshd[5749]: Connection closed by 20.161.92.111 port 58430 Jan 23 23:35:54.616857 sshd-session[5745]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:54.620000 audit[5745]: USER_END pid=5745 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.629714 systemd[1]: sshd@14-172.31.23.100:22-20.161.92.111:58430.service: Deactivated successfully. 
Jan 23 23:35:54.636192 kernel: audit: type=1106 audit(1769211354.620:826): pid=5745 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.636328 kernel: audit: type=1104 audit(1769211354.622:827): pid=5745 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.622000 audit[5745]: CRED_DISP pid=5745 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:54.642544 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:35:54.649251 systemd-logind[1934]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:35:54.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.23.100:22-20.161.92.111:58430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:54.653870 systemd-logind[1934]: Removed session 16. Jan 23 23:35:54.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.23.100:22-20.161.92.111:58440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:54.714900 systemd[1]: Started sshd@15-172.31.23.100:22-20.161.92.111:58440.service - OpenSSH per-connection server daemon (20.161.92.111:58440). Jan 23 23:35:55.193000 audit[5762]: USER_ACCT pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:55.195284 sshd[5762]: Accepted publickey for core from 20.161.92.111 port 58440 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:55.198000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:55.198000 audit[5762]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe81f8230 a2=3 a3=0 items=0 ppid=1 pid=5762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:55.198000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:55.201681 sshd-session[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:55.213182 systemd-logind[1934]: New session 17 of user core. Jan 23 23:35:55.221305 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 23:35:55.229000 audit[5762]: USER_START pid=5762 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:55.234000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:55.924081 sshd[5766]: Connection closed by 20.161.92.111 port 58440 Jan 23 23:35:55.924447 sshd-session[5762]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:55.933000 audit[5762]: USER_END pid=5762 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:55.933000 audit[5762]: CRED_DISP pid=5762 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:55.942259 systemd[1]: sshd@15-172.31.23.100:22-20.161.92.111:58440.service: Deactivated successfully. Jan 23 23:35:55.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.23.100:22-20.161.92.111:58440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:55.949329 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:35:55.954264 systemd-logind[1934]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:35:55.961697 systemd-logind[1934]: Removed session 17. Jan 23 23:35:56.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.23.100:22-20.161.92.111:58444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:56.031014 systemd[1]: Started sshd@16-172.31.23.100:22-20.161.92.111:58444.service - OpenSSH per-connection server daemon (20.161.92.111:58444). 
Jan 23 23:35:56.554000 audit[5776]: USER_ACCT pid=5776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:56.555868 sshd[5776]: Accepted publickey for core from 20.161.92.111 port 58444 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:56.556000 audit[5776]: CRED_ACQ pid=5776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:56.556000 audit[5776]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef8f4e30 a2=3 a3=0 items=0 ppid=1 pid=5776 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:56.556000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:56.559411 sshd-session[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:56.572512 systemd-logind[1934]: New session 18 of user core. Jan 23 23:35:56.578654 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:35:56.589000 audit[5776]: USER_START pid=5776 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:56.595000 audit[5780]: CRED_ACQ pid=5780 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:56.611999 kubelet[3413]: E0123 23:35:56.611645 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f9b44c667-h6s7s" podUID="24e24657-8e54-4ae7-acdb-2eda45aabbdf" Jan 23 23:35:57.965000 audit[5801]: NETFILTER_CFG table=filter:147 family=2 entries=14 op=nft_register_rule pid=5801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:57.965000 audit[5801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff9cd5df0 a2=0 a3=1 items=0 ppid=3525 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:57.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:57.975000 audit[5801]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:57.975000 audit[5801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff9cd5df0 a2=0 a3=1 items=0 ppid=3525 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:57.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:57.983463 sshd[5780]: Connection closed by 20.161.92.111 port 58444 Jan 23 23:35:57.983855 sshd-session[5776]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:57.987000 audit[5776]: USER_END pid=5776 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:57.987000 audit[5776]: CRED_DISP pid=5776 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:57.994600 systemd[1]: sshd@16-172.31.23.100:22-20.161.92.111:58444.service: Deactivated successfully. Jan 23 23:35:57.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.23.100:22-20.161.92.111:58444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:58.004242 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:35:58.008889 systemd-logind[1934]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:35:58.014039 systemd-logind[1934]: Removed session 18. 
Jan 23 23:35:58.033000 audit[5804]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5804 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:58.033000 audit[5804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc1c38b50 a2=0 a3=1 items=0 ppid=3525 pid=5804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:58.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:58.042000 audit[5804]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=5804 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:35:58.042000 audit[5804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc1c38b50 a2=0 a3=1 items=0 ppid=3525 pid=5804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:58.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:35:58.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.23.100:22-20.161.92.111:58458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:58.070494 systemd[1]: Started sshd@17-172.31.23.100:22-20.161.92.111:58458.service - OpenSSH per-connection server daemon (20.161.92.111:58458). Jan 23 23:35:58.586621 sshd[5808]: Accepted publickey for core from 20.161.92.111 port 58458 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:58.585000 audit[5808]: USER_ACCT pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:58.589000 audit[5808]: CRED_ACQ pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:58.589000 audit[5808]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6ad4500 a2=3 a3=0 items=0 ppid=1 pid=5808 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:58.589000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:58.592855 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:58.610394 systemd-logind[1934]: New session 19 of user core. Jan 23 23:35:58.618034 systemd[1]: Started session-19.scope - Session 19 of User core. 
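The NETFILTER_CFG records above (pids 5801 and 5804, both spawned by the same parent, pid 3525) show iptables rules being re-registered in the filter and nat tables; their PROCTITLE values decode, with the helper above, to `iptables-restore -w 5 -W 100000 --noflush --counters`. A small parsing sketch that tallies these reloads, assuming the journal has been exported one record per line (e.g. `journalctl -o short-precise`; the file name journal.txt is an assumption):

```python
import re
from collections import Counter

# Matches e.g. "NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule"
NETFILTER = re.compile(
    r"NETFILTER_CFG table=(?P<table>\w+):\d+ family=(?P<family>\d+) "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+)"
)

def summarize_netfilter(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path) as fh:
        for line in fh:
            m = NETFILTER.search(line)
            if m:
                counts[(m["table"], m["op"])] += int(m["entries"])
    return counts

for (table, op), entries in summarize_netfilter("journal.txt").items():
    print(f"{table:>8} {op:<20} {entries} entries")
```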
Jan 23 23:35:58.625000 audit[5808]: USER_START pid=5808 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:58.630000 audit[5812]: CRED_ACQ pid=5812 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.314320 sshd[5812]: Connection closed by 20.161.92.111 port 58458 Jan 23 23:35:59.314797 sshd-session[5808]: pam_unix(sshd:session): session closed for user core Jan 23 23:35:59.319061 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 23 23:35:59.319193 kernel: audit: type=1106 audit(1769211359.316:857): pid=5808 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.316000 audit[5808]: USER_END pid=5808 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.316000 audit[5808]: CRED_DISP pid=5808 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.333797 kernel: audit: type=1104 audit(1769211359.316:858): pid=5808 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.335070 systemd[1]: sshd@17-172.31.23.100:22-20.161.92.111:58458.service: Deactivated successfully. Jan 23 23:35:59.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.23.100:22-20.161.92.111:58458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:59.342383 kernel: audit: type=1131 audit(1769211359.334:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.23.100:22-20.161.92.111:58458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:59.345215 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:35:59.347459 systemd-logind[1934]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:35:59.355990 systemd-logind[1934]: Removed session 19. Jan 23 23:35:59.410157 systemd[1]: Started sshd@18-172.31.23.100:22-20.161.92.111:58474.service - OpenSSH per-connection server daemon (20.161.92.111:58474). 
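The kernel `audit: type=NNNN` console echoes and the named userspace records above describe the same events (for example, the type=1105 echoes always accompany USER_START records and type=1131 accompanies SERVICE_STOP). A small lookup sketch, covering only the record types that actually occur in this log, for annotating the numeric console copies:

```python
import re

# Numeric audit record types seen in this log, paired with the names the
# userspace records above use for the same events (matching linux/audit.h).
AUDIT_TYPES = {
    1101: "USER_ACCT", 1103: "CRED_ACQ", 1104: "CRED_DISP",
    1105: "USER_START", 1106: "USER_END",
    1130: "SERVICE_START", 1131: "SERVICE_STOP",
    1300: "SYSCALL", 1325: "NETFILTER_CFG", 1327: "PROCTITLE",
}

def annotate(line: str) -> str:
    # Rewrite "type=1131" as "type=SERVICE_STOP(1131)" so the kernel echoes read
    # like the corresponding userspace records.
    return re.sub(
        r"type=(\d+)",
        lambda m: f"type={AUDIT_TYPES.get(int(m[1]), 'UNKNOWN')}({m[1]})",
        line,
    )

print(annotate("kernel: audit: type=1131 audit(1769211359.334:859): pid=1 uid=0 ..."))
```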
Jan 23 23:35:59.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.23.100:22-20.161.92.111:58474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:59.420971 kernel: audit: type=1130 audit(1769211359.409:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.23.100:22-20.161.92.111:58474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:35:59.610083 kubelet[3413]: E0123 23:35:59.609653 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:35:59.610083 kubelet[3413]: E0123 23:35:59.609679 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" podUID="5be57303-da73-45f8-8222-a093d6ce8129" Jan 23 23:35:59.883617 sshd[5822]: Accepted publickey for core from 20.161.92.111 port 58474 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:35:59.882000 audit[5822]: USER_ACCT pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.894077 sshd-session[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:35:59.895591 kernel: audit: type=1101 audit(1769211359.882:861): pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.890000 audit[5822]: CRED_ACQ pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.910117 kernel: audit: type=1103 audit(1769211359.890:862): pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.910248 kernel: audit: type=1006 audit(1769211359.891:863): pid=5822 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 23 23:35:59.891000 audit[5822]: SYSCALL arch=c00000b7 syscall=64 
success=yes exit=3 a0=8 a1=fffff17c5420 a2=3 a3=0 items=0 ppid=1 pid=5822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:59.919870 kernel: audit: type=1300 audit(1769211359.891:863): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff17c5420 a2=3 a3=0 items=0 ppid=1 pid=5822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:35:59.891000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:59.927211 kernel: audit: type=1327 audit(1769211359.891:863): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:35:59.928984 systemd-logind[1934]: New session 20 of user core. Jan 23 23:35:59.937673 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:35:59.946000 audit[5822]: USER_START pid=5822 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.955961 kernel: audit: type=1105 audit(1769211359.946:864): pid=5822 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:35:59.959000 audit[5826]: CRED_ACQ pid=5826 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:00.276773 sshd[5826]: Connection closed by 20.161.92.111 port 58474 Jan 23 23:36:00.277895 sshd-session[5822]: pam_unix(sshd:session): session closed for user core Jan 23 23:36:00.280000 audit[5822]: USER_END pid=5822 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:00.281000 audit[5822]: CRED_DISP pid=5822 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:00.289216 systemd[1]: sshd@18-172.31.23.100:22-20.161.92.111:58474.service: Deactivated successfully. Jan 23 23:36:00.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.23.100:22-20.161.92.111:58474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:36:00.296830 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:36:00.302131 systemd-logind[1934]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:36:00.308033 systemd-logind[1934]: Removed session 20. 
Jan 23 23:36:02.634155 kubelet[3413]: E0123 23:36:02.633435 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqjnv" podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1" Jan 23 23:36:02.636499 kubelet[3413]: E0123 23:36:02.635943 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:36:03.609593 kubelet[3413]: E0123 23:36:03.609519 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" podUID="6c551026-ffac-43ea-999f-0823acd8fbb1" Jan 23 23:36:04.519000 audit[5861]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=5861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:36:04.522269 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 23 23:36:04.522416 kernel: audit: type=1325 audit(1769211364.519:869): table=filter:151 family=2 entries=26 op=nft_register_rule pid=5861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:36:04.519000 audit[5861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee75b980 a2=0 a3=1 items=0 ppid=3525 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:04.533471 kernel: audit: type=1300 audit(1769211364.519:869): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee75b980 a2=0 a3=1 items=0 ppid=3525 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:04.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:36:04.537524 kernel: audit: type=1327 audit(1769211364.519:869): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:36:04.541000 audit[5861]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=5861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:36:04.541000 audit[5861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffee75b980 a2=0 a3=1 items=0 ppid=3525 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:04.555701 kernel: audit: type=1325 audit(1769211364.541:870): table=nat:152 family=2 entries=104 op=nft_register_chain pid=5861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 23:36:04.555840 kernel: audit: type=1300 audit(1769211364.541:870): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffee75b980 a2=0 a3=1 items=0 ppid=3525 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:04.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:36:04.562468 kernel: audit: type=1327 audit(1769211364.541:870): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 23:36:05.372140 systemd[1]: Started sshd@19-172.31.23.100:22-20.161.92.111:53952.service - OpenSSH per-connection server daemon (20.161.92.111:53952). Jan 23 23:36:05.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.23.100:22-20.161.92.111:53952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:36:05.383974 kernel: audit: type=1130 audit(1769211365.371:871): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.23.100:22-20.161.92.111:53952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:36:05.608961 kubelet[3413]: E0123 23:36:05.608655 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" podUID="64fe3344-4e80-444f-ba70-e34e02720a15" Jan 23 23:36:05.877000 audit[5863]: USER_ACCT pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:05.885210 sshd[5863]: Accepted publickey for core from 20.161.92.111 port 53952 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:36:05.886957 kernel: audit: type=1101 audit(1769211365.877:872): pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:05.886000 audit[5863]: CRED_ACQ pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:05.891336 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:36:05.901709 kernel: audit: type=1103 audit(1769211365.886:873): pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:05.901861 kernel: audit: type=1006 audit(1769211365.886:874): pid=5863 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 23 23:36:05.886000 audit[5863]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff29dc620 a2=3 a3=0 items=0 ppid=1 pid=5863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:05.886000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:36:05.909362 systemd-logind[1934]: New session 21 of user core. Jan 23 23:36:05.919293 systemd[1]: Started session-21.scope - Session 21 of User core. 
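The kubelet keeps retrying ghcr.io/flatcar/calico/*:v3.30.4 and the registry keeps answering "not found". A rough sketch for checking a tag directly against the registry, assuming anonymous pull access to public ghcr.io repositories and the standard OCI token/manifest endpoints (the same check can be made from the node with, e.g., `crictl pull`):

```python
import json
import urllib.error
import urllib.request

def ghcr_tag_exists(repository: str, tag: str) -> bool:
    # 1. Anonymous bearer token scoped to pulling the repository.
    token_url = f"https://ghcr.io/token?scope=repository:{repository}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # 2. Ask for the tag's manifest; a 404 means the tag is not published.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repository}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# The tag the pods above are stuck on.
print(ghcr_tag_exists("flatcar/calico/whisker", "v3.30.4"))
```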
Jan 23 23:36:05.929000 audit[5863]: USER_START pid=5863 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:05.933000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:06.277237 sshd[5867]: Connection closed by 20.161.92.111 port 53952 Jan 23 23:36:06.277600 sshd-session[5863]: pam_unix(sshd:session): session closed for user core Jan 23 23:36:06.280000 audit[5863]: USER_END pid=5863 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:06.281000 audit[5863]: CRED_DISP pid=5863 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:06.288582 systemd[1]: sshd@19-172.31.23.100:22-20.161.92.111:53952.service: Deactivated successfully. Jan 23 23:36:06.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.23.100:22-20.161.92.111:53952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:36:06.296384 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:36:06.307731 systemd-logind[1934]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:36:06.310970 systemd-logind[1934]: Removed session 21. 
Jan 23 23:36:09.609735 kubelet[3413]: E0123 23:36:09.609629 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f9b44c667-h6s7s" podUID="24e24657-8e54-4ae7-acdb-2eda45aabbdf" Jan 23 23:36:10.609123 kubelet[3413]: E0123 23:36:10.609045 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" podUID="5be57303-da73-45f8-8222-a093d6ce8129" Jan 23 23:36:11.389771 systemd[1]: Started sshd@20-172.31.23.100:22-20.161.92.111:53964.service - OpenSSH per-connection server daemon (20.161.92.111:53964). Jan 23 23:36:11.397516 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 23 23:36:11.397577 kernel: audit: type=1130 audit(1769211371.389:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.23.100:22-20.161.92.111:53964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:36:11.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.23.100:22-20.161.92.111:53964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:36:11.608709 kubelet[3413]: E0123 23:36:11.608447 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:36:11.899000 audit[5883]: USER_ACCT pid=5883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:11.906448 sshd[5883]: Accepted publickey for core from 20.161.92.111 port 53964 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:36:11.910000 audit[5883]: CRED_ACQ pid=5883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:11.918719 kernel: audit: type=1101 audit(1769211371.899:881): pid=5883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:11.918841 kernel: audit: type=1103 audit(1769211371.910:882): pid=5883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:11.919237 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:36:11.925677 kernel: audit: type=1006 audit(1769211371.910:883): pid=5883 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 23 23:36:11.910000 audit[5883]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8bb1630 a2=3 a3=0 items=0 ppid=1 pid=5883 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:11.935169 kernel: audit: type=1300 audit(1769211371.910:883): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8bb1630 a2=3 a3=0 items=0 ppid=1 pid=5883 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:11.910000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:36:11.944796 kernel: audit: type=1327 audit(1769211371.910:883): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:36:11.954014 systemd-logind[1934]: New session 22 of user core. Jan 23 23:36:11.976422 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 23 23:36:11.984000 audit[5883]: USER_START pid=5883 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:11.994000 audit[5887]: CRED_ACQ pid=5887 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:12.000752 kernel: audit: type=1105 audit(1769211371.984:884): pid=5883 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:12.000833 kernel: audit: type=1103 audit(1769211371.994:885): pid=5887 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:12.331440 sshd[5887]: Connection closed by 20.161.92.111 port 53964 Jan 23 23:36:12.330315 sshd-session[5883]: pam_unix(sshd:session): session closed for user core Jan 23 23:36:12.332000 audit[5883]: USER_END pid=5883 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:12.343788 systemd[1]: sshd@20-172.31.23.100:22-20.161.92.111:53964.service: Deactivated successfully. Jan 23 23:36:12.332000 audit[5883]: CRED_DISP pid=5883 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:12.350004 kernel: audit: type=1106 audit(1769211372.332:886): pid=5883 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:12.351245 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 23:36:12.359272 systemd-logind[1934]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:36:12.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.23.100:22-20.161.92.111:53964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:36:12.361964 kernel: audit: type=1104 audit(1769211372.332:887): pid=5883 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:12.364109 systemd-logind[1934]: Removed session 22. 
Jan 23 23:36:14.611026 kubelet[3413]: E0123 23:36:14.610667 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66" Jan 23 23:36:16.613928 kubelet[3413]: E0123 23:36:16.613814 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" podUID="6c551026-ffac-43ea-999f-0823acd8fbb1" Jan 23 23:36:16.617140 kubelet[3413]: E0123 23:36:16.616353 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqjnv" podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1" Jan 23 23:36:17.438304 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 23:36:17.438440 kernel: audit: type=1130 audit(1769211377.435:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.23.100:22-20.161.92.111:52746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:36:17.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.23.100:22-20.161.92.111:52746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:36:17.436429 systemd[1]: Started sshd@21-172.31.23.100:22-20.161.92.111:52746.service - OpenSSH per-connection server daemon (20.161.92.111:52746). 
Jan 23 23:36:17.944000 audit[5901]: USER_ACCT pid=5901 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:17.952158 sshd[5901]: Accepted publickey for core from 20.161.92.111 port 52746 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:36:17.952000 audit[5901]: CRED_ACQ pid=5901 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:17.960938 kernel: audit: type=1101 audit(1769211377.944:890): pid=5901 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:17.961074 kernel: audit: type=1103 audit(1769211377.952:891): pid=5901 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:17.963864 sshd-session[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:36:17.970727 kernel: audit: type=1006 audit(1769211377.953:892): pid=5901 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 23 23:36:17.971413 kernel: audit: type=1300 audit(1769211377.953:892): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc95014f0 a2=3 a3=0 items=0 ppid=1 pid=5901 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:17.953000 audit[5901]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc95014f0 a2=3 a3=0 items=0 ppid=1 pid=5901 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:17.953000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:36:17.980028 kernel: audit: type=1327 audit(1769211377.953:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:36:17.989518 systemd-logind[1934]: New session 23 of user core. Jan 23 23:36:18.001269 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 23:36:18.010000 audit[5901]: USER_START pid=5901 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:18.023963 kernel: audit: type=1105 audit(1769211378.010:893): pid=5901 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:18.023000 audit[5905]: CRED_ACQ pid=5905 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:18.032976 kernel: audit: type=1103 audit(1769211378.023:894): pid=5905 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:18.396788 sshd[5905]: Connection closed by 20.161.92.111 port 52746 Jan 23 23:36:18.396580 sshd-session[5901]: pam_unix(sshd:session): session closed for user core Jan 23 23:36:18.398000 audit[5901]: USER_END pid=5901 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:18.406000 audit[5901]: CRED_DISP pid=5901 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:18.413222 systemd[1]: sshd@21-172.31.23.100:22-20.161.92.111:52746.service: Deactivated successfully. Jan 23 23:36:18.417649 kernel: audit: type=1106 audit(1769211378.398:895): pid=5901 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:18.417782 kernel: audit: type=1104 audit(1769211378.406:896): pid=5901 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:18.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.23.100:22-20.161.92.111:52746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:36:18.422630 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 23:36:18.428598 systemd-logind[1934]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:36:18.434334 systemd-logind[1934]: Removed session 23. 
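Each of the SSH sessions above lasts only a fraction of a second to a couple of seconds between session_open and session_close. A parsing sketch that measures this from the systemd-logind messages, assuming the journal has been exported one record per line in the same `Mon DD HH:MM:SS.ssssss` syslog format (journal.txt and the hard-coded year are assumptions):

```python
import re
from datetime import datetime

# Matches e.g. "Jan 23 23:35:54.194984 systemd-logind[1934]: New session 16 of user core."
LINE = re.compile(
    r"(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2})\.\d+ systemd-logind\[\d+\]: "
    r"(?P<event>New|Removed) session (?P<sid>\d+)"
)

def session_durations(path: str, year: int = 2026) -> dict[str, float]:
    opened: dict[str, datetime] = {}
    durations: dict[str, float] = {}
    with open(path) as fh:
        for line in fh:
            m = LINE.search(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
            if m["event"] == "New":
                opened[m["sid"]] = ts
            elif m["sid"] in opened:
                durations[m["sid"]] = (ts - opened.pop(m["sid"])).total_seconds()
    return durations

for sid, secs in sorted(session_durations("journal.txt").items(), key=lambda kv: int(kv[0])):
    print(f"session {sid}: {secs:.0f}s")
```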
Jan 23 23:36:19.609152 kubelet[3413]: E0123 23:36:19.608274 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-778cf5d48d-s7h4h" podUID="64fe3344-4e80-444f-ba70-e34e02720a15" Jan 23 23:36:22.610017 kubelet[3413]: E0123 23:36:22.608619 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-2xtt8" podUID="5be57303-da73-45f8-8222-a093d6ce8129" Jan 23 23:36:23.498775 systemd[1]: Started sshd@22-172.31.23.100:22-20.161.92.111:48794.service - OpenSSH per-connection server daemon (20.161.92.111:48794). Jan 23 23:36:23.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.23.100:22-20.161.92.111:48794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 23:36:23.501563 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 23:36:23.502006 kernel: audit: type=1130 audit(1769211383.498:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.23.100:22-20.161.92.111:48794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 23:36:23.608573 kubelet[3413]: E0123 23:36:23.608480 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b9c97bcf-96pwk" podUID="1b0b2743-62ac-460d-ba7c-52d229e3b875" Jan 23 23:36:24.011000 audit[5917]: USER_ACCT pid=5917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:24.019865 sshd[5917]: Accepted publickey for core from 20.161.92.111 port 48794 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY Jan 23 23:36:24.026779 kernel: audit: type=1101 audit(1769211384.011:899): pid=5917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:24.026934 kernel: audit: type=1103 audit(1769211384.019:900): pid=5917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:24.019000 audit[5917]: CRED_ACQ pid=5917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 23 23:36:24.022478 sshd-session[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:36:24.036272 kernel: audit: type=1006 audit(1769211384.020:901): pid=5917 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 23 23:36:24.020000 audit[5917]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe277900 a2=3 a3=0 items=0 ppid=1 pid=5917 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:24.043850 kernel: audit: type=1300 audit(1769211384.020:901): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe277900 a2=3 a3=0 items=0 ppid=1 pid=5917 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 23:36:24.020000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:36:24.048869 kernel: audit: type=1327 audit(1769211384.020:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 23:36:24.049836 systemd-logind[1934]: New session 24 of user core. Jan 23 23:36:24.057264 systemd[1]: Started session-24.scope - Session 24 of User core. 
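Every accepted login above reports the same key, `RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY`. OpenSSH derives that fingerprint as the unpadded base64 of the SHA-256 digest of the raw key blob, so it can be recomputed from an authorized_keys entry to confirm which key is being used (the path below is an assumption; `ssh-keygen -lf` gives the same answer):

```python
import base64
import hashlib

def openssh_sha256_fingerprint(authorized_keys_line: str) -> str:
    # An authorized_keys entry is "<key-type> <base64-blob> [comment]".
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    # OpenSSH prints the digest base64-encoded with the trailing "=" padding removed.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Assumed location of the "core" user's keys on this host.
with open("/home/core/.ssh/authorized_keys") as fh:
    for line in fh:
        if line.strip() and not line.startswith("#"):
            print(openssh_sha256_fingerprint(line))
```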
Jan 23 23:36:24.067000 audit[5917]: USER_START pid=5917 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:24.071000 audit[5921]: CRED_ACQ pid=5921 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:24.078086 kernel: audit: type=1105 audit(1769211384.067:902): pid=5917 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:24.083949 kernel: audit: type=1103 audit(1769211384.071:903): pid=5921 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:24.427268 sshd[5921]: Connection closed by 20.161.92.111 port 48794
Jan 23 23:36:24.428755 sshd-session[5917]: pam_unix(sshd:session): session closed for user core
Jan 23 23:36:24.432000 audit[5917]: USER_END pid=5917 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:24.441409 systemd[1]: sshd@22-172.31.23.100:22-20.161.92.111:48794.service: Deactivated successfully.
Jan 23 23:36:24.432000 audit[5917]: CRED_DISP pid=5917 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:24.452128 kernel: audit: type=1106 audit(1769211384.432:904): pid=5917 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:24.452236 kernel: audit: type=1104 audit(1769211384.432:905): pid=5917 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:24.446791 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 23:36:24.453553 systemd-logind[1934]: Session 24 logged out. Waiting for processes to exit.
Jan 23 23:36:24.459022 systemd-logind[1934]: Removed session 24.
Jan 23 23:36:24.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.23.100:22-20.161.92.111:48794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 23:36:24.610781 containerd[1972]: time="2026-01-23T23:36:24.610690754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 23:36:24.925844 containerd[1972]: time="2026-01-23T23:36:24.925556464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 23:36:24.927937 containerd[1972]: time="2026-01-23T23:36:24.927810040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 23:36:24.928198 containerd[1972]: time="2026-01-23T23:36:24.928117528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 23 23:36:24.928773 kubelet[3413]: E0123 23:36:24.928701 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 23:36:24.929412 kubelet[3413]: E0123 23:36:24.928839 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 23:36:24.931210 kubelet[3413]: E0123 23:36:24.931098 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b96f1a59a5d8446cad0dc01e07796dc4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krc7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f9b44c667-h6s7s_calico-system(24e24657-8e54-4ae7-acdb-2eda45aabbdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:36:24.934272 containerd[1972]: time="2026-01-23T23:36:24.934143640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 23:36:25.200429 containerd[1972]: time="2026-01-23T23:36:25.200315833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 23:36:25.203125 containerd[1972]: time="2026-01-23T23:36:25.202901077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 23:36:25.203125 containerd[1972]: time="2026-01-23T23:36:25.203027677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 23 23:36:25.203665 kubelet[3413]: E0123 23:36:25.203619 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 23:36:25.203838 kubelet[3413]: E0123 23:36:25.203809 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 23:36:25.204993 kubelet[3413]: E0123 23:36:25.204849 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krc7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f9b44c667-h6s7s_calico-system(24e24657-8e54-4ae7-acdb-2eda45aabbdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:36:25.206566 kubelet[3413]: E0123 23:36:25.206468 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f9b44c667-h6s7s" podUID="24e24657-8e54-4ae7-acdb-2eda45aabbdf"
Jan 23 23:36:29.530974 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 23:36:29.531112 kernel: audit: type=1130 audit(1769211389.523:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.23.100:22-20.161.92.111:48808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 23:36:29.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.23.100:22-20.161.92.111:48808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 23:36:29.523946 systemd[1]: Started sshd@23-172.31.23.100:22-20.161.92.111:48808.service - OpenSSH per-connection server daemon (20.161.92.111:48808).
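containerd's "fetch failed after status: 404 Not Found" followed by the kubelet's NotFound errors means the registry simply has no manifest for the requested tag. The sketch below shows one way to reproduce that lookup against the standard OCI Distribution API, using only the Python standard library. The repository and tag come from the log above; the anonymous token endpoint (https://ghcr.io/token) and everything else are assumptions about ghcr.io's public-pull flow, not something the log confirms.

    import json
    import urllib.error
    import urllib.request

    # Hypothetical reproduction of the manifest lookup containerd performs.
    # Assumes ghcr.io issues anonymous pull tokens from /token for public
    # repositories (the usual OCI Distribution token flow).
    REGISTRY = "ghcr.io"
    REPO = "flatcar/calico/whisker"   # repository from the failing pull above
    TAG = "v3.30.4"                   # tag containerd reported as not found

    def manifest_exists(repo: str, tag: str) -> bool:
        token_url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        req = urllib.request.Request(
            f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
            method="HEAD",
            headers={
                "Authorization": f"Bearer {token}",
                # Accept both OCI and Docker manifest media types, as a runtime would.
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.oci.image.manifest.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json, "
                          "application/vnd.docker.distribution.manifest.v2+json",
            },
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:   # matches the "404 Not Found" containerd logs above
                return False
            raise

    if __name__ == "__main__":
        print(f"{REPO}:{TAG} exists:", manifest_exists(REPO, TAG))

A False result here would be consistent with every pull failure in this section: the v3.30.4 tag is absent from the ghcr.io/flatcar/calico/* repositories the node is configured to use.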
Jan 23 23:36:29.610618 kubelet[3413]: E0123 23:36:29.610488 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqjnv" podUID="0699514f-51e2-4aa1-86de-4ee590fe63e1"
Jan 23 23:36:29.611787 kubelet[3413]: E0123 23:36:29.611232 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvgjz" podUID="bc435867-361d-4b3f-a3e1-96c440fc0a66"
Jan 23 23:36:30.011000 audit[5940]: USER_ACCT pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.012959 sshd[5940]: Accepted publickey for core from 20.161.92.111 port 48808 ssh2: RSA SHA256:ypdhUqoTY06pBP3UvLlAm2LjmautYebA9jgvJDmvCzY
Jan 23 23:36:30.021423 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:36:30.018000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.029712 kernel: audit: type=1101 audit(1769211390.011:908): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.030143 kernel: audit: type=1103 audit(1769211390.018:909): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.034087 kernel: audit: type=1006 audit(1769211390.018:910): pid=5940 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jan 23 23:36:30.018000 audit[5940]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd83cbf50 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 23:36:30.041072 kernel: audit: type=1300 audit(1769211390.018:910): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd83cbf50 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 23:36:30.018000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 23:36:30.044063 kernel: audit: type=1327 audit(1769211390.018:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 23:36:30.052131 systemd-logind[1934]: New session 25 of user core.
Jan 23 23:36:30.061537 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 23:36:30.069000 audit[5940]: USER_START pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.072000 audit[5944]: CRED_ACQ pid=5944 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.084713 kernel: audit: type=1105 audit(1769211390.069:911): pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.084867 kernel: audit: type=1103 audit(1769211390.072:912): pid=5944 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.417644 sshd[5944]: Connection closed by 20.161.92.111 port 48808
Jan 23 23:36:30.418934 sshd-session[5940]: pam_unix(sshd:session): session closed for user core
Jan 23 23:36:30.426000 audit[5940]: USER_END pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.433010 systemd[1]: sshd@23-172.31.23.100:22-20.161.92.111:48808.service: Deactivated successfully.
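Sessions 24 and 25 above each open and close within well under a second. The sketch below pairs the systemd-logind "New session N of user core." and "Removed session N." entries to report per-session duration; the line format and the sample entries are taken from this log, while the parsing details and the assumed year are illustrative only.

    import re
    from datetime import datetime

    # Illustrative parser for systemd-logind entries of the form seen above, e.g.
    #   Jan 23 23:36:24.049836 systemd-logind[1934]: New session 24 of user core.
    #   Jan 23 23:36:24.459022 systemd-logind[1934]: Removed session 24.
    LINE = re.compile(
        r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: "
        r"(?:New session (?P<new>\d+) of user (?P<user>\S+)\.|Removed session (?P<gone>\d+)\.)$"
    )

    def session_durations(lines, year=2026):
        """Yield (session_id, user, seconds) for sessions opened and closed in `lines`."""
        opened = {}
        for line in lines:
            m = LINE.match(line.strip())
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
            if m["new"]:
                opened[m["new"]] = (m["user"], ts)
            elif m["gone"] in opened:
                user, start = opened.pop(m["gone"])
                yield m["gone"], user, (ts - start).total_seconds()

    if __name__ == "__main__":
        sample = [
            "Jan 23 23:36:24.049836 systemd-logind[1934]: New session 24 of user core.",
            "Jan 23 23:36:24.459022 systemd-logind[1934]: Removed session 24.",
        ]
        for sid, user, secs in session_durations(sample):
            print(f"session {sid} ({user}): {secs:.3f}s")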
Jan 23 23:36:30.426000 audit[5940]: CRED_DISP pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.438994 kernel: audit: type=1106 audit(1769211390.426:913): pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.442850 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 23:36:30.448022 systemd-logind[1934]: Session 25 logged out. Waiting for processes to exit.
Jan 23 23:36:30.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.23.100:22-20.161.92.111:48808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 23:36:30.449062 kernel: audit: type=1104 audit(1769211390.426:914): pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 23 23:36:30.454375 systemd-logind[1934]: Removed session 25.
Jan 23 23:36:31.609602 containerd[1972]: time="2026-01-23T23:36:31.609545649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 23:36:31.853085 containerd[1972]: time="2026-01-23T23:36:31.853010602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 23:36:31.855425 containerd[1972]: time="2026-01-23T23:36:31.855343702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 23:36:31.855567 containerd[1972]: time="2026-01-23T23:36:31.855374830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 23 23:36:31.856582 kubelet[3413]: E0123 23:36:31.856172 3413 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 23:36:31.856582 kubelet[3413]: E0123 23:36:31.856239 3413 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 23:36:31.856582 kubelet[3413]: E0123 23:36:31.856463 3413 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q2h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57b65b9-26wjv_calico-system(6c551026-ffac-43ea-999f-0823acd8fbb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:36:31.858881 kubelet[3413]: E0123 23:36:31.858025 3413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57b65b9-26wjv" podUID="6c551026-ffac-43ea-999f-0823acd8fbb1"
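Every failing pod in this section ends with containers waiting in ErrImagePull or ImagePullBackOff. As a closing illustration, the sketch below uses the official kubernetes Python client to enumerate such pods cluster-wide; it assumes the client package is installed and a reachable kubeconfig, neither of which comes from this log.

    from kubernetes import client, config

    # Illustrative only: list every container currently waiting on an image pull,
    # the state the kubelet reports above for the Calico pods.
    PULL_REASONS = {"ErrImagePull", "ImagePullBackOff"}

    def pods_stuck_on_image_pull():
        config.load_kube_config()  # or config.load_incluster_config() inside a pod
        v1 = client.CoreV1Api()
        for pod in v1.list_pod_for_all_namespaces(watch=False).items:
            for cs in (pod.status.container_statuses or []):
                waiting = cs.state.waiting if cs.state else None
                if waiting and waiting.reason in PULL_REASONS:
                    yield pod.metadata.namespace, pod.metadata.name, cs.name, cs.image

    if __name__ == "__main__":
        for ns, pod, container, image in pods_stuck_on_image_pull():
            print(f"{ns}/{pod} container={container} image={image}")

Run against the cluster producing this log, the output would be expected to include the calico-apiserver, whisker, goldmane, csi-node-driver, and calico-kube-controllers pods named above, all referencing ghcr.io/flatcar/calico images at tag v3.30.4.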