Jan 28 01:17:30.080924 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:27:36 -00 2026
Jan 28 01:17:30.080967 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=230bd1064bb7a0684ba668d5ea4a2f2b71590a34541eb9a6374c5e871e67bba4
Jan 28 01:17:30.080982 kernel: BIOS-provided physical RAM map:
Jan 28 01:17:30.080997 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 01:17:30.081008 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 01:17:30.081017 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 01:17:30.081026 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 01:17:30.081038 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 01:17:30.081048 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 28 01:17:30.081060 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 28 01:17:30.081069 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 28 01:17:30.081083 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 28 01:17:30.081094 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 28 01:17:30.083182 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 28 01:17:30.083203 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 28 01:17:30.083214 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 01:17:30.083233 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 28 01:17:30.083242 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 28 01:17:30.083251 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 28 01:17:30.083260 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 28 01:17:30.083269 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 28 01:17:30.083279 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 01:17:30.083290 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 28 01:17:30.083301 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 01:17:30.083311 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 28 01:17:30.083320 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 01:17:30.083333 kernel: NX (Execute Disable) protection: active
Jan 28 01:17:30.083343 kernel: APIC: Static calls initialized
Jan 28 01:17:30.083355 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 28 01:17:30.083365 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 28 01:17:30.083375 kernel: extended physical RAM map:
Jan 28 01:17:30.083384 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 01:17:30.083393 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 01:17:30.083404 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 01:17:30.083415 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 01:17:30.083427 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 01:17:30.083438 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 28 01:17:30.083453 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 28 01:17:30.083464 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 28 01:17:30.083474 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 28 01:17:30.083490 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 28 01:17:30.083506 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 28 01:17:30.083517 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 28 01:17:30.091290 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 28 01:17:30.091330 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 28 01:17:30.091342 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 28 01:17:30.091352 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 28 01:17:30.091362 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 01:17:30.091372 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 28 01:17:30.091383 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 28 01:17:30.091403 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 28 01:17:30.091414 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 28 01:17:30.091425 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 28 01:17:30.091435 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 01:17:30.091446 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 28 01:17:30.091457 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 01:17:30.091467 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 28 01:17:30.091477 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 01:17:30.091488 kernel: efi: EFI v2.7 by EDK II
Jan 28 01:17:30.091498 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 28 01:17:30.091508 kernel: random: crng init done
Jan 28 01:17:30.091523 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 28 01:17:30.091658 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 28 01:17:30.091672 kernel: secureboot: Secure boot disabled
Jan 28 01:17:30.091683 kernel: SMBIOS 2.8 present.
Jan 28 01:17:30.091693 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 28 01:17:30.091704 kernel: DMI: Memory slots populated: 1/1
Jan 28 01:17:30.091714 kernel: Hypervisor detected: KVM
Jan 28 01:17:30.091724 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 28 01:17:30.091735 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 01:17:30.091746 kernel: kvm-clock: using sched offset of 18843044390 cycles
Jan 28 01:17:30.091757 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 01:17:30.091774 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 01:17:30.091785 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 01:17:30.091797 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 01:17:30.091808 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 28 01:17:30.091819 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 28 01:17:30.091830 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 01:17:30.091841 kernel: Using GB pages for direct mapping
Jan 28 01:17:30.091855 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:17:30.091867 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 28 01:17:30.091878 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 28 01:17:30.091889 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:17:30.091900 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:17:30.091911 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 28 01:17:30.091922 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:17:30.091936 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:17:30.091947 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:17:30.091959 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:17:30.091970 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 28 01:17:30.091981 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 28 01:17:30.091993 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 28 01:17:30.092006 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 28 01:17:30.092021 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 28 01:17:30.092032 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 28 01:17:30.092042 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 28 01:17:30.092052 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 28 01:17:30.092062 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 28 01:17:30.092074 kernel: No NUMA configuration found
Jan 28 01:17:30.092086 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 28 01:17:30.092101 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 28 01:17:30.094487 kernel: Zone ranges:
Jan 28 01:17:30.094502 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 01:17:30.094515 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 28 01:17:30.094525 kernel: Normal empty
Jan 28 01:17:30.094646 kernel: Device empty
Jan 28 01:17:30.094657 kernel: Movable zone start for each node
Jan 28 01:17:30.094667 kernel: Early memory node ranges
Jan 28 01:17:30.094687 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 28 01:17:30.094697 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 28 01:17:30.094707 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 28 01:17:30.094717 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 28 01:17:30.094727 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 28 01:17:30.094737 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 28 01:17:30.094747 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 28 01:17:30.094759 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 28 01:17:30.094776 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 28 01:17:30.094787 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:17:30.094808 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 28 01:17:30.094821 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 28 01:17:30.094834 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:17:30.094848 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 28 01:17:30.094860 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 28 01:17:30.094870 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 28 01:17:30.094881 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 28 01:17:30.094896 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 28 01:17:30.094911 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 01:17:30.094925 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 01:17:30.094936 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 01:17:30.094951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 01:17:30.094961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 01:17:30.094973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 01:17:30.094986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 01:17:30.095001 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 01:17:30.095013 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 01:17:30.095024 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 01:17:30.095038 kernel: TSC deadline timer available
Jan 28 01:17:30.095049 kernel: CPU topo: Max. logical packages: 1
Jan 28 01:17:30.095063 kernel: CPU topo: Max. logical dies: 1
Jan 28 01:17:30.095074 kernel: CPU topo: Max. dies per package: 1
Jan 28 01:17:30.095084 kernel: CPU topo: Max. threads per core: 1
Jan 28 01:17:30.095095 kernel: CPU topo: Num. cores per package: 4
Jan 28 01:17:30.095168 kernel: CPU topo: Num. threads per package: 4
Jan 28 01:17:30.095181 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 28 01:17:30.095196 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 01:17:30.095208 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 01:17:30.095219 kernel: kvm-guest: setup PV sched yield
Jan 28 01:17:30.095230 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 28 01:17:30.095240 kernel: Booting paravirtualized kernel on KVM
Jan 28 01:17:30.095252 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 01:17:30.095263 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 01:17:30.102226 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 28 01:17:30.102248 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 28 01:17:30.102260 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 01:17:30.102272 kernel: kvm-guest: PV spinlocks enabled
Jan 28 01:17:30.102283 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 01:17:30.102297 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=230bd1064bb7a0684ba668d5ea4a2f2b71590a34541eb9a6374c5e871e67bba4
Jan 28 01:17:30.102309 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:17:30.102329 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:17:30.102340 kernel: Fallback order for Node 0: 0
Jan 28 01:17:30.102351 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 28 01:17:30.102363 kernel: Policy zone: DMA32
Jan 28 01:17:30.102374 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:17:30.102384 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 01:17:30.102397 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 28 01:17:30.102415 kernel: ftrace: allocated 157 pages with 5 groups
Jan 28 01:17:30.102428 kernel: Dynamic Preempt: voluntary
Jan 28 01:17:30.102441 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:17:30.102463 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:17:30.102477 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 01:17:30.102490 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:17:30.102503 kernel: Rude variant of Tasks RCU enabled.
Jan 28 01:17:30.102521 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:17:30.102644 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:17:30.102660 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 01:17:30.102674 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:17:30.102687 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:17:30.102700 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:17:30.102714 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 01:17:30.102731 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:17:30.102742 kernel: Console: colour dummy device 80x25
Jan 28 01:17:30.102752 kernel: printk: legacy console [ttyS0] enabled
Jan 28 01:17:30.102763 kernel: ACPI: Core revision 20240827
Jan 28 01:17:30.102778 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 01:17:30.102789 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 01:17:30.102799 kernel: x2apic enabled
Jan 28 01:17:30.102814 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 01:17:30.102826 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 01:17:30.102840 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 01:17:30.102853 kernel: kvm-guest: setup PV IPIs
Jan 28 01:17:30.102864 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 01:17:30.102874 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:17:30.102885 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 01:17:30.102901 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 01:17:30.102914 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 01:17:30.102926 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 01:17:30.102939 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 01:17:30.102952 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 01:17:30.102966 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 01:17:30.102978 kernel: Speculative Store Bypass: Vulnerable
Jan 28 01:17:30.102994 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 01:17:30.103006 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 01:17:30.103017 kernel: active return thunk: srso_alias_return_thunk
Jan 28 01:17:30.103030 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 01:17:30.103042 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 01:17:30.103053 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 01:17:30.103063 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 01:17:30.103078 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 01:17:30.103091 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 01:17:30.105882 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 01:17:30.105906 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 01:17:30.105919 kernel: Freeing SMP alternatives memory: 32K
Jan 28 01:17:30.105930 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:17:30.105942 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 28 01:17:30.105963 kernel: landlock: Up and running.
Jan 28 01:17:30.105975 kernel: SELinux: Initializing.
Jan 28 01:17:30.105986 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:17:30.105997 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:17:30.106007 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 01:17:30.106018 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 01:17:30.106031 kernel: signal: max sigframe size: 1776
Jan 28 01:17:30.106048 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:17:30.106060 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:17:30.106071 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 28 01:17:30.106082 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 01:17:30.106095 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:17:30.106169 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 01:17:30.106184 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 01:17:30.106199 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 01:17:30.106210 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 01:17:30.106222 kernel: Memory: 2439044K/2565800K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15536K init, 2504K bss, 120820K reserved, 0K cma-reserved)
Jan 28 01:17:30.106233 kernel: devtmpfs: initialized
Jan 28 01:17:30.106246 kernel: x86/mm: Memory block size: 128MB
Jan 28 01:17:30.106258 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 28 01:17:30.106269 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 28 01:17:30.106284 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 28 01:17:30.106296 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 28 01:17:30.106310 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 28 01:17:30.106321 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 28 01:17:30.106333 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:17:30.106343 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 01:17:30.106354 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:17:30.106372 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:17:30.106382 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:17:30.106393 kernel: audit: type=2000 audit(1769563016.755:1): state=initialized audit_enabled=0 res=1
Jan 28 01:17:30.106404 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:17:30.106417 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 01:17:30.106429 kernel: cpuidle: using governor menu
Jan 28 01:17:30.106442 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:17:30.106459 kernel: dca service started, version 1.12.1
Jan 28 01:17:30.106471 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 28 01:17:30.106483 kernel: PCI: Using configuration type 1 for base access
Jan 28 01:17:30.106495 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 01:17:30.106506 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:17:30.106518 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:17:30.106646 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:17:30.106667 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:17:30.106680 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:17:30.106692 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:17:30.106703 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:17:30.106716 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:17:30.106727 kernel: ACPI: Interpreter enabled
Jan 28 01:17:30.106739 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 01:17:30.106755 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 01:17:30.106768 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 01:17:30.106780 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 01:17:30.106792 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 01:17:30.106804 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 01:17:30.107261 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 01:17:30.107640 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 01:17:30.107919 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 01:17:30.107939 kernel: PCI host bridge to bus 0000:00
Jan 28 01:17:30.108262 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 01:17:30.108495 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 01:17:30.108831 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 01:17:30.109073 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 28 01:17:30.109513 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 28 01:17:30.117759 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 28 01:17:30.118035 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 01:17:30.120163 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 28 01:17:30.120445 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 28 01:17:30.120876 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 28 01:17:30.121200 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 28 01:17:30.121465 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 28 01:17:30.121838 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 01:17:30.122175 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 28 01:17:30.122431 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 28 01:17:30.122774 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 28 01:17:30.123029 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 28 01:17:30.127946 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 28 01:17:30.128280 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 28 01:17:30.132291 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 28 01:17:30.133726 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 28 01:17:30.134007 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 28 01:17:30.134357 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 28 01:17:30.134750 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 28 01:17:30.135017 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 28 01:17:30.136940 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 28 01:17:30.137301 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 28 01:17:30.139203 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 01:17:30.139474 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 14648 usecs
Jan 28 01:17:30.141031 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 28 01:17:30.141355 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 28 01:17:30.144775 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 28 01:17:30.145057 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 28 01:17:30.145505 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 28 01:17:30.145528 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 01:17:30.145656 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 01:17:30.145670 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 01:17:30.145682 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 01:17:30.145702 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 01:17:30.145714 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 01:17:30.145727 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 01:17:30.145739 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 01:17:30.145751 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 01:17:30.145763 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 01:17:30.145776 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 01:17:30.145791 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 01:17:30.145804 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 01:17:30.145816 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 01:17:30.145828 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 01:17:30.145840 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 01:17:30.145853 kernel: iommu: Default domain type: Translated
Jan 28 01:17:30.145865 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 01:17:30.145881 kernel: efivars: Registered efivars operations
Jan 28 01:17:30.145893 kernel: PCI: Using ACPI for IRQ routing
Jan 28 01:17:30.145904 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 01:17:30.145916 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 28 01:17:30.145928 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 28 01:17:30.145940 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 28 01:17:30.145952 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 28 01:17:30.145968 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 28 01:17:30.145980 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 28 01:17:30.145993 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 28 01:17:30.146007 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 28 01:17:30.146328 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 01:17:30.146674 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 01:17:30.146932 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 01:17:30.146952 kernel: vgaarb: loaded
Jan 28 01:17:30.146967 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 01:17:30.146980 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 01:17:30.146992 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 01:17:30.147006 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:17:30.147020 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:17:30.147037 kernel: pnp: PnP ACPI init
Jan 28 01:17:30.147448 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 28 01:17:30.147472 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 01:17:30.147485 kernel: hrtimer: interrupt took 7590421 ns
Jan 28 01:17:30.147499 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 01:17:30.147512 kernel: NET: Registered PF_INET protocol family
Jan 28 01:17:30.147524 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:17:30.147655 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:17:30.147690 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:17:30.147708 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:17:30.147721 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:17:30.147736 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:17:30.147748 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:17:30.147759 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:17:30.147778 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:17:30.147792 kernel: NET: Registered PF_XDP protocol family
Jan 28 01:17:30.148065 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 28 01:17:30.150474 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 28 01:17:30.151982 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 01:17:30.152305 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 01:17:30.155183 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 01:17:30.155440 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 28 01:17:30.155782 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 28 01:17:30.156024 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 28 01:17:30.156044 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:17:30.156057 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:17:30.156070 kernel: Initialise system trusted keyrings
Jan 28 01:17:30.156091 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:17:30.156181 kernel: Key type asymmetric registered
Jan 28 01:17:30.156197 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:17:30.156209 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 01:17:30.156223 kernel: io scheduler mq-deadline registered
Jan 28 01:17:30.156237 kernel: io scheduler kyber registered
Jan 28 01:17:30.156251 kernel: io scheduler bfq registered
Jan 28 01:17:30.156270 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 01:17:30.156282 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 01:17:30.156294 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 01:17:30.156312 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 01:17:30.156330 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:17:30.156345 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 01:17:30.156357 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 01:17:30.156368 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 01:17:30.156379 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 01:17:30.156760 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 01:17:30.156785 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 01:17:30.157045 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 01:17:30.161187 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:17:17 UTC (1769563037)
Jan 28 01:17:30.161431 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 28 01:17:30.161453 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 01:17:30.161465 kernel: efifb: probing for efifb
Jan 28 01:17:30.161477 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 28 01:17:30.161488 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 28 01:17:30.161509 kernel: efifb: scrolling: redraw
Jan 28 01:17:30.161521 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 28 01:17:30.163789 kernel: Console: switching to colour frame buffer device 160x50
Jan 28 01:17:30.163806 kernel: fb0: EFI VGA frame buffer device
Jan 28 01:17:30.163818 kernel: pstore: Using crash dump compression: deflate
Jan 28 01:17:30.163831 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 28 01:17:30.163843 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:17:30.163862 kernel: Segment Routing with IPv6
Jan 28 01:17:30.163874 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:17:30.163889 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:17:30.163901 kernel: Key type dns_resolver registered
Jan 28
01:17:30.163913 kernel: IPI shorthand broadcast: enabled Jan 28 01:17:30.163926 kernel: sched_clock: Marking stable (16114311769, 3909379858)->(24206178475, -4182486848) Jan 28 01:17:30.163939 kernel: registered taskstats version 1 Jan 28 01:17:30.163955 kernel: Loading compiled-in X.509 certificates Jan 28 01:17:30.163967 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: e20b9b58c2206ebaa16c4a71db244a0e01a2e623' Jan 28 01:17:30.163979 kernel: Demotion targets for Node 0: null Jan 28 01:17:30.163992 kernel: Key type .fscrypt registered Jan 28 01:17:30.164006 kernel: Key type fscrypt-provisioning registered Jan 28 01:17:30.164020 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 01:17:30.164031 kernel: ima: Allocated hash algorithm: sha1 Jan 28 01:17:30.164046 kernel: ima: No architecture policies found Jan 28 01:17:30.164057 kernel: clk: Disabling unused clocks Jan 28 01:17:30.164070 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 28 01:17:30.164085 kernel: Write protecting the kernel read-only data: 47104k Jan 28 01:17:30.164097 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 28 01:17:30.165245 kernel: Run /init as init process Jan 28 01:17:30.165264 kernel: with arguments: Jan 28 01:17:30.165283 kernel: /init Jan 28 01:17:30.165295 kernel: with environment: Jan 28 01:17:30.165306 kernel: HOME=/ Jan 28 01:17:30.165318 kernel: TERM=linux Jan 28 01:17:30.165331 kernel: SCSI subsystem initialized Jan 28 01:17:30.165343 kernel: libata version 3.00 loaded. 
Jan 28 01:17:30.165980 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 01:17:30.166007 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 01:17:30.166335 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 28 01:17:30.167013 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 28 01:17:30.170484 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 01:17:30.170861 kernel: scsi host0: ahci Jan 28 01:17:30.171202 kernel: scsi host1: ahci Jan 28 01:17:30.171491 kernel: scsi host2: ahci Jan 28 01:17:30.172504 kernel: scsi host3: ahci Jan 28 01:17:30.172869 kernel: scsi host4: ahci Jan 28 01:17:30.173200 kernel: scsi host5: ahci Jan 28 01:17:30.173223 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 28 01:17:30.173240 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 28 01:17:30.173259 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 28 01:17:30.173271 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 28 01:17:30.173282 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 28 01:17:30.173293 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 28 01:17:30.173306 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 01:17:30.173319 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 01:17:30.173333 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 28 01:17:30.173351 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 01:17:30.173365 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 01:17:30.173376 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 01:17:30.173387 kernel: ata3.00: LPM support broken, forcing max_power Jan 28 01:17:30.173398 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 28 01:17:30.173409 
kernel: ata3.00: applying bridge limits Jan 28 01:17:30.173424 kernel: ata3.00: LPM support broken, forcing max_power Jan 28 01:17:30.173444 kernel: ata3.00: configured for UDMA/100 Jan 28 01:17:30.173828 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 28 01:17:30.174089 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 28 01:17:30.174893 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 28 01:17:30.175216 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 28 01:17:30.175243 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 01:17:30.175256 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 01:17:30.175267 kernel: GPT:16515071 != 27000831 Jan 28 01:17:30.175279 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 01:17:30.175289 kernel: GPT:16515071 != 27000831 Jan 28 01:17:30.175304 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 01:17:30.175315 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 01:17:30.175897 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 28 01:17:30.175922 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 01:17:30.175935 kernel: device-mapper: uevent: version 1.0.3 Jan 28 01:17:30.175953 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 01:17:30.175965 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 28 01:17:30.175976 kernel: raid6: avx2x4 gen() 22406 MB/s Jan 28 01:17:30.175987 kernel: raid6: avx2x2 gen() 18417 MB/s Jan 28 01:17:30.176001 kernel: raid6: avx2x1 gen() 8503 MB/s Jan 28 01:17:30.176016 kernel: raid6: using algorithm avx2x4 gen() 22406 MB/s Jan 28 01:17:30.176028 kernel: raid6: .... 
xor() 2402 MB/s, rmw enabled Jan 28 01:17:30.176039 kernel: raid6: using avx2x2 recovery algorithm Jan 28 01:17:30.176054 kernel: xor: automatically using best checksumming function avx Jan 28 01:17:30.176065 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 01:17:30.176077 kernel: BTRFS: device fsid 34b0c34a-a205-4a5e-b928-fc41d62e7a91 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182) Jan 28 01:17:30.176089 kernel: BTRFS info (device dm-0): first mount of filesystem 34b0c34a-a205-4a5e-b928-fc41d62e7a91 Jan 28 01:17:30.176161 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:17:30.176178 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 01:17:30.176190 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 01:17:30.176201 kernel: loop: module loaded Jan 28 01:17:30.176212 kernel: loop0: detected capacity change from 0 to 100552 Jan 28 01:17:30.176223 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:17:30.176239 systemd[1]: Successfully made /usr/ read-only. Jan 28 01:17:30.176261 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 01:17:30.176274 systemd[1]: Detected virtualization kvm. Jan 28 01:17:30.176285 systemd[1]: Detected architecture x86-64. Jan 28 01:17:30.176298 systemd[1]: Running in initrd. Jan 28 01:17:30.176313 systemd[1]: No hostname configured, using default hostname. Jan 28 01:17:30.176331 systemd[1]: Hostname set to . Jan 28 01:17:30.176343 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 01:17:30.176354 systemd[1]: Queued start job for default target initrd.target. 
Jan 28 01:17:30.176366 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:17:30.176729 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:17:30.176745 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:17:30.176759 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 01:17:30.176776 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:17:30.176789 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 01:17:30.176804 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 01:17:30.176816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:17:30.176829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:17:30.176844 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 01:17:30.176859 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:17:30.176872 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:17:30.176884 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:17:30.176896 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:17:30.176911 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:17:30.176923 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:17:30.176938 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:17:30.176951 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:17:30.176965 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 28 01:17:30.176978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:17:30.176990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:17:30.177004 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:17:30.177016 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:17:30.177032 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 01:17:30.177045 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 01:17:30.177060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:17:30.177072 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 01:17:30.177085 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 01:17:30.177097 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 01:17:30.177165 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:17:30.177183 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:17:30.177197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:17:30.177211 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 01:17:30.177719 systemd-journald[321]: Collecting audit messages is enabled. Jan 28 01:17:30.177759 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:17:30.177776 kernel: audit: type=1130 audit(1769563050.083:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:30.177797 systemd-journald[321]: Journal started Jan 28 01:17:30.177832 systemd-journald[321]: Runtime Journal (/run/log/journal/4b228846b6c5498a92b14fe4f303af76) is 6M, max 48M, 42M free. Jan 28 01:17:30.226331 kernel: audit: type=1130 audit(1769563050.178:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:30.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:30.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:30.240748 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 01:17:30.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:30.345678 kernel: audit: type=1130 audit(1769563050.267:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:30.345780 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:17:30.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:30.423685 kernel: audit: type=1130 audit(1769563050.365:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:30.445719 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:17:30.474318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:17:30.689105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:17:30.794184 kernel: audit: type=1130 audit(1769563050.716:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:30.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:30.733824 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:17:31.005711 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 01:17:31.063463 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:17:31.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.079412 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:17:31.193870 kernel: audit: type=1130 audit(1769563051.061:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.257484 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 28 01:17:31.304079 kernel: audit: type=1130 audit(1769563051.270:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.306315 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:17:31.421755 kernel: audit: type=1130 audit(1769563051.354:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.379513 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 01:17:31.481045 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 01:17:31.489020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:17:31.574029 kernel: Bridge firewalling registered Jan 28 01:17:31.574074 kernel: audit: type=1130 audit(1769563051.525:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:31.530729 systemd-modules-load[323]: Inserted module 'br_netfilter' Jan 28 01:17:31.623425 kernel: audit: type=1130 audit(1769563051.585:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.534226 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:17:31.636497 dracut-cmdline[352]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=230bd1064bb7a0684ba668d5ea4a2f2b71590a34541eb9a6374c5e871e67bba4 Jan 28 01:17:31.637659 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:17:31.794335 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:17:31.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:31.803000 audit: BPF prog-id=6 op=LOAD Jan 28 01:17:31.807473 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:17:32.083479 systemd-resolved[399]: Positive Trust Anchors: Jan 28 01:17:32.083843 systemd-resolved[399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:17:32.083849 systemd-resolved[399]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 01:17:32.083890 systemd-resolved[399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:17:32.273686 systemd-resolved[399]: Defaulting to hostname 'linux'. Jan 28 01:17:32.292730 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:17:32.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:32.330766 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:17:32.724890 kernel: Loading iSCSI transport class v2.0-870. Jan 28 01:17:32.853083 kernel: iscsi: registered transport (tcp) Jan 28 01:17:32.972353 kernel: iscsi: registered transport (qla4xxx) Jan 28 01:17:32.978340 kernel: QLogic iSCSI HBA Driver Jan 28 01:17:33.302071 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:17:34.109839 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:17:34.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:34.247092 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 28 01:17:35.859869 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 01:17:35.987359 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 28 01:17:35.987516 kernel: audit: type=1130 audit(1769563055.897:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:35.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:36.027003 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 01:17:36.089651 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 01:17:36.694333 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:17:36.890311 kernel: audit: type=1130 audit(1769563056.716:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:36.890445 kernel: audit: type=1334 audit(1769563056.751:18): prog-id=7 op=LOAD Jan 28 01:17:36.890465 kernel: audit: type=1334 audit(1769563056.751:19): prog-id=8 op=LOAD Jan 28 01:17:36.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:36.751000 audit: BPF prog-id=7 op=LOAD Jan 28 01:17:36.751000 audit: BPF prog-id=8 op=LOAD Jan 28 01:17:36.764226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:17:37.052495 systemd-udevd[600]: Using default interface naming scheme 'v257'. Jan 28 01:17:37.108906 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 28 01:17:37.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:37.131724 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 01:17:37.199488 kernel: audit: type=1130 audit(1769563057.123:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:37.426965 dracut-pre-trigger[641]: rd.md=0: removing MD RAID activation Jan 28 01:17:37.647359 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:17:37.763342 kernel: audit: type=1130 audit(1769563057.678:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:37.763394 kernel: audit: type=1334 audit(1769563057.686:22): prog-id=9 op=LOAD Jan 28 01:17:37.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:37.686000 audit: BPF prog-id=9 op=LOAD Jan 28 01:17:37.716130 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:17:37.888498 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:17:38.048328 kernel: audit: type=1130 audit(1769563057.922:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:37.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:38.054671 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:17:38.593011 systemd-networkd[725]: lo: Link UP Jan 28 01:17:38.593021 systemd-networkd[725]: lo: Gained carrier Jan 28 01:17:38.636710 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:17:38.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:38.704828 kernel: audit: type=1130 audit(1769563058.666:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:38.673114 systemd[1]: Reached target network.target - Network. Jan 28 01:17:38.762334 kernel: audit: type=1130 audit(1769563058.709:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:38.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:38.713422 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:17:38.737886 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 01:17:39.312115 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 28 01:17:39.463939 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 28 01:17:39.586450 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 01:17:39.665464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:17:39.795294 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 01:17:39.865151 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 01:17:39.872297 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:17:39.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:39.872795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:17:39.908765 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:17:40.032495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:17:40.213050 disk-uuid[777]: Primary Header is updated. Jan 28 01:17:40.213050 disk-uuid[777]: Secondary Entries is updated. Jan 28 01:17:40.213050 disk-uuid[777]: Secondary Header is updated. Jan 28 01:17:40.296675 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:17:40.296688 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 28 01:17:40.491869 systemd-networkd[725]: eth0: Link UP
Jan 28 01:17:40.529876 systemd-networkd[725]: eth0: Gained carrier
Jan 28 01:17:40.529898 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 28 01:17:40.702305 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 01:17:40.778362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:17:40.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:40.881745 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 28 01:17:40.947715 kernel: AES CTR mode by8 optimization enabled
Jan 28 01:17:41.369337 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:17:41.427023 kernel: kauditd_printk_skb: 2 callbacks suppressed
Jan 28 01:17:41.427063 kernel: audit: type=1130 audit(1769563061.385:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:41.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:41.393222 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:17:41.470367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:17:41.476333 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:17:41.549082 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:17:41.700882 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:17:41.766107 kernel: audit: type=1130 audit(1769563061.722:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:41.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:41.861516 disk-uuid[779]: Warning: The kernel is still using the old partition table.
Jan 28 01:17:41.861516 disk-uuid[779]: The new table will be used at the next reboot or after you
Jan 28 01:17:41.861516 disk-uuid[779]: run partprobe(8) or kpartx(8)
Jan 28 01:17:41.861516 disk-uuid[779]: The operation has completed successfully.
Jan 28 01:17:41.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:41.929923 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:17:42.036926 kernel: audit: type=1130 audit(1769563061.945:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:42.036968 kernel: audit: type=1131 audit(1769563061.945:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:41.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:41.930141 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:17:41.951518 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:17:42.119253 systemd-networkd[725]: eth0: Gained IPv6LL
Jan 28 01:17:42.268432 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868)
Jan 28 01:17:42.284068 kernel: BTRFS info (device vda6): first mount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e
Jan 28 01:17:42.284139 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:17:42.342640 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:17:42.342726 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:17:42.374402 kernel: BTRFS info (device vda6): last unmount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e
Jan 28 01:17:42.387046 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:17:42.414484 kernel: audit: type=1130 audit(1769563062.391:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:42.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:42.413254 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:17:43.714074 ignition[887]: Ignition 2.24.0
Jan 28 01:17:43.714091 ignition[887]: Stage: fetch-offline
Jan 28 01:17:43.714427 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:17:43.714453 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:17:43.725237 ignition[887]: parsed url from cmdline: ""
Jan 28 01:17:43.725248 ignition[887]: no config URL provided
Jan 28 01:17:43.725458 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:17:43.725491 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:17:43.725724 ignition[887]: op(1): [started] loading QEMU firmware config module
Jan 28 01:17:43.725738 ignition[887]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 28 01:17:43.967608 ignition[887]: op(1): [finished] loading QEMU firmware config module
Jan 28 01:17:45.137084 ignition[887]: parsing config with SHA512: 60f214fa2a9661e515f784bb89caaef619e69fe4d319adeae70f991880179e12e8e24f9ca531c557200dc84ed7096eb1dadef3ae41bf97c5c6c760ddc94d3303
Jan 28 01:17:45.236987 unknown[887]: fetched base config from "system"
Jan 28 01:17:45.238093 ignition[887]: fetch-offline: fetch-offline passed
Jan 28 01:17:45.237055 unknown[887]: fetched user config from "qemu"
Jan 28 01:17:45.238364 ignition[887]: Ignition finished successfully
Jan 28 01:17:45.274514 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:17:45.340137 kernel: audit: type=1130 audit(1769563065.313:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:45.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:45.314480 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 28 01:17:45.321708 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:17:45.660087 ignition[897]: Ignition 2.24.0
Jan 28 01:17:45.660103 ignition[897]: Stage: kargs
Jan 28 01:17:45.660517 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:17:45.660647 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:17:45.706271 ignition[897]: kargs: kargs passed
Jan 28 01:17:45.706367 ignition[897]: Ignition finished successfully
Jan 28 01:17:45.748749 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:17:45.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:45.841727 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:17:45.974900 kernel: audit: type=1130 audit(1769563065.821:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:46.397155 ignition[905]: Ignition 2.24.0
Jan 28 01:17:46.399908 ignition[905]: Stage: disks
Jan 28 01:17:46.400180 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:17:46.400263 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:17:46.442814 ignition[905]: disks: disks passed
Jan 28 01:17:46.442910 ignition[905]: Ignition finished successfully
Jan 28 01:17:46.533372 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:17:46.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:46.577446 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:17:46.626503 kernel: audit: type=1130 audit(1769563066.564:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:46.588118 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:17:46.632716 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:17:46.632874 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:17:46.632937 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:17:46.649757 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:17:46.856965 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 28 01:17:46.891894 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:17:46.961908 kernel: audit: type=1130 audit(1769563066.905:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:46.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:46.964092 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:17:48.501283 kernel: EXT4-fs (vda9): mounted filesystem 89ee8811-a55f-4471-b9a6-3378249aa495 r/w with ordered data mode. Quota mode: none.
Jan 28 01:17:48.523814 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:17:48.555429 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:17:48.645741 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:17:48.728279 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:17:48.764374 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 28 01:17:48.764447 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:17:48.764495 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:17:49.264344 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924)
Jan 28 01:17:49.264469 kernel: BTRFS info (device vda6): first mount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e
Jan 28 01:17:49.264495 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:17:48.901811 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:17:49.138092 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:17:49.475838 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:17:49.476716 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:17:49.532186 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:17:50.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:50.906489 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:17:51.016172 kernel: audit: type=1130 audit(1769563070.932:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:50.934802 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:17:51.003389 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:17:51.066087 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 01:17:51.087640 kernel: BTRFS info (device vda6): last unmount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e
Jan 28 01:17:51.308469 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 01:17:51.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:51.406895 kernel: audit: type=1130 audit(1769563071.337:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:51.719322 ignition[1021]: INFO : Ignition 2.24.0
Jan 28 01:17:51.719322 ignition[1021]: INFO : Stage: mount
Jan 28 01:17:51.767311 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:17:51.767311 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:17:51.767311 ignition[1021]: INFO : mount: mount passed
Jan 28 01:17:51.767311 ignition[1021]: INFO : Ignition finished successfully
Jan 28 01:17:51.938282 kernel: audit: type=1130 audit(1769563071.872:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:51.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:17:51.827909 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 01:17:51.882698 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 01:17:52.116749 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:17:52.285488 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1033)
Jan 28 01:17:52.285644 kernel: BTRFS info (device vda6): first mount of filesystem 2e7ebc59-41d8-45a2-a17d-e5d2a56a196e
Jan 28 01:17:52.335664 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:17:52.420990 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:17:52.421423 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:17:52.457442 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:17:52.855971 ignition[1050]: INFO : Ignition 2.24.0
Jan 28 01:17:52.855971 ignition[1050]: INFO : Stage: files
Jan 28 01:17:52.876287 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:17:52.876287 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:17:52.876287 ignition[1050]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 01:17:52.994413 ignition[1050]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 01:17:52.994413 ignition[1050]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 01:17:53.144179 ignition[1050]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 01:17:53.144179 ignition[1050]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 01:17:53.144179 ignition[1050]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 01:17:53.143469 unknown[1050]: wrote ssh authorized keys file for user: core
Jan 28 01:17:53.238043 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 28 01:17:53.260349 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 28 01:17:53.660059 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 28 01:17:54.759709 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 28 01:17:54.759709 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 28 01:17:54.839912 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 28 01:17:55.631328 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 28 01:18:01.061808 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 28 01:18:01.061808 ignition[1050]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 28 01:18:01.108001 ignition[1050]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:18:01.108001 ignition[1050]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:18:01.108001 ignition[1050]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 28 01:18:01.108001 ignition[1050]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 28 01:18:01.108001 ignition[1050]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 01:18:01.108001 ignition[1050]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 01:18:01.108001 ignition[1050]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 28 01:18:01.108001 ignition[1050]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 28 01:18:01.347337 ignition[1050]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 01:18:01.377082 ignition[1050]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 01:18:01.377082 ignition[1050]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 28 01:18:01.377082 ignition[1050]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 28 01:18:01.377082 ignition[1050]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 28 01:18:01.377082 ignition[1050]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:18:01.377082 ignition[1050]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:18:01.377082 ignition[1050]: INFO : files: files passed
Jan 28 01:18:01.377082 ignition[1050]: INFO : Ignition finished successfully
Jan 28 01:18:01.704933 kernel: audit: type=1130 audit(1769563081.391:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:01.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:01.373341 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 01:18:01.405457 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 01:18:01.671109 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 01:18:01.752989 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 01:18:01.757398 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 01:18:01.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:01.906941 kernel: audit: type=1130 audit(1769563081.848:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:01.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:01.927344 initrd-setup-root-after-ignition[1081]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 28 01:18:01.997795 kernel: audit: type=1131 audit(1769563081.848:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:01.997926 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:18:02.023109 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:18:02.023109 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:18:02.063655 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:18:02.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:02.154758 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 01:18:02.215073 kernel: audit: type=1130 audit(1769563082.147:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:02.216167 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 01:18:03.759136 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 01:18:03.760454 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 01:18:03.900238 kernel: audit: type=1130 audit(1769563083.803:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:03.900468 kernel: audit: type=1131 audit(1769563083.803:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:03.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:03.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:03.851066 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 28 01:18:03.908659 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 28 01:18:03.988203 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 28 01:18:04.002167 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 28 01:18:04.284178 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:18:04.359878 kernel: audit: type=1130 audit(1769563084.301:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:04.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:04.327813 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 28 01:18:04.457044 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:18:04.626497 kernel: audit: type=1131 audit(1769563084.553:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:04.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:04.457482 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:18:04.474800 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:18:04.483032 systemd[1]: Stopped target timers.target - Timer Units.
Jan 28 01:18:04.510885 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 28 01:18:04.511124 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:18:04.630900 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 28 01:18:04.678918 systemd[1]: Stopped target basic.target - Basic System.
Jan 28 01:18:04.704460 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 28 01:18:04.842225 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:18:04.950237 kernel: audit: type=1131 audit(1769563084.903:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:04.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:04.868045 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 28 01:18:04.868412 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 01:18:04.902791 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 28 01:18:04.903101 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:18:04.903270 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 28 01:18:04.903491 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 28 01:18:04.903744 systemd[1]: Stopped target swap.target - Swaps.
Jan 28 01:18:04.903858 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 28 01:18:04.904067 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:18:04.904487 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:18:04.904758 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:18:04.904850 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 28 01:18:04.913011 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:18:05.322152 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 28 01:18:05.325392 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:18:05.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:05.455917 kernel: audit: type=1131 audit(1769563085.402:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:05.403844 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 28 01:18:05.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:18:05.404060 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:18:05.492470 systemd[1]: Stopped target paths.target - Path Units.
Jan 28 01:18:05.597112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 28 01:18:05.615517 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:18:05.728839 systemd[1]: Stopped target slices.target - Slice Units.
Jan 28 01:18:05.759823 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 28 01:18:05.796124 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 28 01:18:05.805977 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:18:05.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:05.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:05.867376 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:18:05.867639 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:18:05.867815 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 28 01:18:05.867919 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:18:06.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:05.868194 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:18:05.868438 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:18:05.868813 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:18:05.868952 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:18:06.488772 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 28 01:18:06.488816 kernel: audit: type=1131 audit(1769563086.402:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:06.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:05.901795 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 01:18:06.090669 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 01:18:06.091838 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:18:06.132941 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:18:06.707493 kernel: audit: type=1131 audit(1769563086.616:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:06.707528 kernel: audit: type=1131 audit(1769563086.670:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:06.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:06.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:06.250970 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 28 01:18:06.728941 ignition[1107]: INFO : Ignition 2.24.0 Jan 28 01:18:06.728941 ignition[1107]: INFO : Stage: umount Jan 28 01:18:06.728941 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:18:06.728941 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:18:06.728941 ignition[1107]: INFO : umount: umount passed Jan 28 01:18:06.728941 ignition[1107]: INFO : Ignition finished successfully Jan 28 01:18:06.795473 kernel: audit: type=1131 audit(1769563086.747:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:06.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:06.255384 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:18:06.411206 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:18:06.425887 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:18:06.617833 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:18:06.618030 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:18:06.722675 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:18:06.726815 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:18:06.727025 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:18:06.749462 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:18:06.749683 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 28 01:18:07.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.096052 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:18:07.128082 kernel: audit: type=1131 audit(1769563087.072:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.096390 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:18:07.260994 kernel: audit: type=1130 audit(1769563087.163:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.261036 kernel: audit: type=1131 audit(1769563087.163:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.188416 systemd[1]: Stopped target network.target - Network. Jan 28 01:18:07.645826 kernel: audit: type=1131 audit(1769563087.338:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:07.645882 kernel: audit: type=1131 audit(1769563087.430:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.645903 kernel: audit: type=1131 audit(1769563087.486:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.307022 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:18:07.307163 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:18:07.339170 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:18:07.339276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 28 01:18:07.434208 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:18:07.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.434770 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:18:07.487120 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:18:07.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.851000 audit: BPF prog-id=6 op=UNLOAD Jan 28 01:18:07.487226 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:18:07.866000 audit: BPF prog-id=9 op=UNLOAD Jan 28 01:18:07.550997 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:18:07.551131 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:18:07.563841 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:18:07.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.592759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 28 01:18:07.731394 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:18:07.731937 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:18:07.811098 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:18:07.811283 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:18:08.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.852906 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 01:18:07.878035 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:18:07.878130 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:18:07.893828 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:18:07.935772 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:18:08.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:08.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:07.935906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:18:07.952775 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:18:07.952882 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:18:07.956043 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:18:07.956125 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 28 01:18:07.956263 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:18:08.104266 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:18:08.104880 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:18:08.149146 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:18:08.149379 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:18:08.209449 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:18:08.209526 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:18:08.235423 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:18:08.235642 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:18:08.279256 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:18:08.279426 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:18:08.469843 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:18:08.474358 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:18:08.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:08.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:08.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:08.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:08.501125 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:18:08.501176 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 01:18:08.501251 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:18:08.501460 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:18:08.501522 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:18:08.501700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:18:08.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:08.501768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:18:08.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:08.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:08.662675 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:18:08.662915 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:18:08.704526 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 28 01:18:08.706092 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:18:08.748500 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:18:08.783399 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:18:08.921730 systemd[1]: Switching root. Jan 28 01:18:09.051428 systemd-journald[321]: Received SIGTERM from PID 1 (systemd). Jan 28 01:18:09.051855 systemd-journald[321]: Journal stopped Jan 28 01:18:16.133657 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:18:16.133753 kernel: SELinux: policy capability open_perms=1 Jan 28 01:18:16.133782 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:18:16.133809 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:18:16.133827 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:18:16.133847 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:18:16.133862 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:18:16.133984 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:18:16.134012 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 01:18:16.134031 systemd[1]: Successfully loaded SELinux policy in 322.891ms. Jan 28 01:18:16.134062 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 44.014ms. Jan 28 01:18:16.134084 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 01:18:16.134252 systemd[1]: Detected virtualization kvm. Jan 28 01:18:16.134408 systemd[1]: Detected architecture x86-64. Jan 28 01:18:16.134694 systemd[1]: Detected first boot. 
Jan 28 01:18:16.134725 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 01:18:16.134745 zram_generator::config[1151]: No configuration found. Jan 28 01:18:16.134826 kernel: Guest personality initialized and is inactive Jan 28 01:18:16.134844 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 28 01:18:16.134866 kernel: Initialized host personality Jan 28 01:18:16.134882 kernel: NET: Registered PF_VSOCK protocol family Jan 28 01:18:16.134904 systemd[1]: Populated /etc with preset unit settings. Jan 28 01:18:16.134922 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 28 01:18:16.134940 kernel: audit: type=1334 audit(1769563093.161:89): prog-id=12 op=LOAD Jan 28 01:18:16.135030 kernel: audit: type=1334 audit(1769563093.165:90): prog-id=3 op=UNLOAD Jan 28 01:18:16.135052 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 01:18:16.135074 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 01:18:16.135091 kernel: audit: type=1334 audit(1769563093.165:91): prog-id=13 op=LOAD Jan 28 01:18:16.135108 kernel: audit: type=1334 audit(1769563093.165:92): prog-id=14 op=LOAD Jan 28 01:18:16.135125 kernel: audit: type=1334 audit(1769563093.166:93): prog-id=4 op=UNLOAD Jan 28 01:18:16.135143 kernel: audit: type=1334 audit(1769563093.166:94): prog-id=5 op=UNLOAD Jan 28 01:18:16.135229 kernel: audit: type=1131 audit(1769563093.174:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.135249 kernel: audit: type=1334 audit(1769563093.266:96): prog-id=12 op=UNLOAD Jan 28 01:18:16.135267 kernel: audit: type=1130 audit(1769563093.301:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:16.135285 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 01:18:16.135303 kernel: audit: type=1131 audit(1769563093.301:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.135441 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:18:16.135522 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 01:18:16.135651 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:18:16.135676 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:18:16.135695 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:18:16.135716 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:18:16.135737 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:18:16.136447 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:18:16.136475 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:18:16.136496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:18:16.136517 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:18:16.136654 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:18:16.136679 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 01:18:16.136770 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 28 01:18:16.136869 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 01:18:16.136894 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:18:16.136913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:18:16.136933 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 01:18:16.136952 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 01:18:16.136972 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 01:18:16.136991 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:18:16.137075 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:18:16.137258 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:18:16.137279 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 28 01:18:16.137301 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:18:16.137322 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:18:16.137403 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:18:16.137429 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:18:16.137512 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 01:18:16.137626 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:18:16.137650 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 28 01:18:16.137671 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:18:16.137694 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 28 01:18:16.137716 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. 
Jan 28 01:18:16.137736 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:18:16.137756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:18:16.137844 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 01:18:16.137869 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 01:18:16.137890 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 01:18:16.137909 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 01:18:16.137929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:18:16.137949 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 01:18:16.138036 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:18:16.138060 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:18:16.138083 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 01:18:16.138104 systemd[1]: Reached target machines.target - Containers. Jan 28 01:18:16.138123 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:18:16.138144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:18:16.138163 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:18:16.138250 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:18:16.138274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:18:16.138296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 28 01:18:16.138315 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:18:16.138339 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 01:18:16.138425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:18:16.138444 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:18:16.138633 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 01:18:16.138660 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 01:18:16.138678 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 01:18:16.138695 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 01:18:16.138713 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:18:16.138732 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:18:16.138817 kernel: ACPI: bus type drm_connector registered Jan 28 01:18:16.138837 kernel: fuse: init (API version 7.41) Jan 28 01:18:16.138854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:18:16.138872 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:18:16.138890 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:18:16.138977 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 01:18:16.139000 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 28 01:18:16.139021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:18:16.139037 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:18:16.139096 systemd-journald[1238]: Collecting audit messages is enabled. Jan 28 01:18:16.139212 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 01:18:16.139236 systemd-journald[1238]: Journal started Jan 28 01:18:16.139269 systemd-journald[1238]: Runtime Journal (/run/log/journal/4b228846b6c5498a92b14fe4f303af76) is 6M, max 48M, 42M free. Jan 28 01:18:14.311000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 28 01:18:15.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:15.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:15.591000 audit: BPF prog-id=14 op=UNLOAD Jan 28 01:18:15.591000 audit: BPF prog-id=13 op=UNLOAD Jan 28 01:18:15.608000 audit: BPF prog-id=15 op=LOAD Jan 28 01:18:15.618000 audit: BPF prog-id=16 op=LOAD Jan 28 01:18:15.620000 audit: BPF prog-id=17 op=LOAD Jan 28 01:18:16.128000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 01:18:16.128000 audit[1238]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd26b98040 a2=4000 a3=0 items=0 ppid=1 pid=1238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:16.128000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 28 01:18:13.120224 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:18:13.167500 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 01:18:13.174628 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 01:18:13.175694 systemd[1]: systemd-journald.service: Consumed 2.944s CPU time. Jan 28 01:18:16.171402 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:18:16.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.189705 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 01:18:16.196815 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:18:16.204499 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:18:16.222948 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 28 01:18:16.240746 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:18:16.262440 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:18:16.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.277968 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:18:16.279315 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:18:16.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.294983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:18:16.295430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:18:16.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 28 01:18:16.314098 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:18:16.314758 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:18:16.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.322917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:18:16.323291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:18:16.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.334128 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:18:16.334708 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:18:16.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:16.345272 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:18:16.345811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:18:16.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.360329 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:18:16.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.372764 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:18:16.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.386315 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:18:16.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.401830 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Jan 28 01:18:16.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.432297 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:18:16.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.490448 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:18:16.509948 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 28 01:18:16.532048 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 01:18:16.555828 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:18:16.571764 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:18:16.571825 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:18:16.594083 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 01:18:16.609113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:18:16.609660 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:18:16.634704 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 01:18:16.657931 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 28 01:18:16.672758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:18:16.675963 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:18:16.684865 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:18:16.695891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:18:16.723880 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 01:18:16.731160 systemd-journald[1238]: Time spent on flushing to /var/log/journal/4b228846b6c5498a92b14fe4f303af76 is 171.370ms for 1226 entries. Jan 28 01:18:16.731160 systemd-journald[1238]: System Journal (/var/log/journal/4b228846b6c5498a92b14fe4f303af76) is 8M, max 163.5M, 155.5M free. Jan 28 01:18:16.945782 systemd-journald[1238]: Received client request to flush runtime journal. Jan 28 01:18:16.766945 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:18:16.800728 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:18:16.835415 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:18:17.009727 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 01:18:17.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:17.044187 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 28 01:18:17.069624 kernel: loop1: detected capacity change from 0 to 111560 Jan 28 01:18:17.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:17.079764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:18:17.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:17.090806 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:18:17.120519 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 01:18:17.327057 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:18:17.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:17.356977 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 01:18:17.359000 audit: BPF prog-id=18 op=LOAD Jan 28 01:18:17.359000 audit: BPF prog-id=19 op=LOAD Jan 28 01:18:17.360000 audit: BPF prog-id=20 op=LOAD Jan 28 01:18:17.368690 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 28 01:18:17.407000 audit: BPF prog-id=21 op=LOAD Jan 28 01:18:17.411074 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:18:17.438021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 28 01:18:17.451616 kernel: loop2: detected capacity change from 0 to 50784 Jan 28 01:18:17.461146 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 01:18:17.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:17.492000 audit: BPF prog-id=22 op=LOAD Jan 28 01:18:17.497000 audit: BPF prog-id=23 op=LOAD Jan 28 01:18:17.497000 audit: BPF prog-id=24 op=LOAD Jan 28 01:18:17.501095 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 28 01:18:17.538000 audit: BPF prog-id=25 op=LOAD Jan 28 01:18:17.538000 audit: BPF prog-id=26 op=LOAD Jan 28 01:18:17.538000 audit: BPF prog-id=27 op=LOAD Jan 28 01:18:17.550861 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:18:17.857107 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Jan 28 01:18:17.857180 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Jan 28 01:18:17.915762 kernel: loop3: detected capacity change from 0 to 229808 Jan 28 01:18:17.916887 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:18:17.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:17.977650 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 01:18:17.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:18.088915 systemd-nsresourced[1292]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 01:18:18.094220 kernel: loop4: detected capacity change from 0 to 111560 Jan 28 01:18:18.103127 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 28 01:18:18.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:18.181726 kernel: loop5: detected capacity change from 0 to 50784 Jan 28 01:18:18.206729 kernel: loop6: detected capacity change from 0 to 229808 Jan 28 01:18:18.244702 (sd-merge)[1304]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 28 01:18:18.359503 (sd-merge)[1304]: Merged extensions into '/usr'. Jan 28 01:18:18.498166 systemd[1]: Reload requested from client PID 1273 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 01:18:18.499650 systemd[1]: Reloading... Jan 28 01:18:18.578148 systemd-oomd[1288]: No swap; memory pressure usage will be degraded Jan 28 01:18:18.773165 systemd-resolved[1289]: Positive Trust Anchors: Jan 28 01:18:18.773246 systemd-resolved[1289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:18:18.773253 systemd-resolved[1289]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 01:18:18.773302 systemd-resolved[1289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:18:18.928654 systemd-resolved[1289]: Defaulting to hostname 'linux'. Jan 28 01:18:19.025855 zram_generator::config[1343]: No configuration found. Jan 28 01:18:20.025934 systemd[1]: Reloading finished in 1523 ms. Jan 28 01:18:20.192968 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 01:18:20.326672 kernel: kauditd_printk_skb: 49 callbacks suppressed Jan 28 01:18:20.326847 kernel: audit: type=1130 audit(1769563100.239:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:20.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:20.246763 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:18:20.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:20.539302 kernel: audit: type=1130 audit(1769563100.405:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:20.560063 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:18:20.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:20.737165 kernel: audit: type=1130 audit(1769563100.603:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:20.745774 kernel: audit: type=1130 audit(1769563100.738:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:20.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:20.606827 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 01:18:20.780469 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:18:20.849143 systemd[1]: Starting ensure-sysext.service... Jan 28 01:18:20.872255 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 28 01:18:20.916000 audit: BPF prog-id=8 op=UNLOAD Jan 28 01:18:20.916000 audit: BPF prog-id=7 op=UNLOAD Jan 28 01:18:20.946068 kernel: audit: type=1334 audit(1769563100.916:150): prog-id=8 op=UNLOAD Jan 28 01:18:20.946132 kernel: audit: type=1334 audit(1769563100.916:151): prog-id=7 op=UNLOAD Jan 28 01:18:20.964111 kernel: audit: type=1334 audit(1769563100.946:152): prog-id=28 op=LOAD Jan 28 01:18:20.946000 audit: BPF prog-id=28 op=LOAD Jan 28 01:18:20.957880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:18:20.946000 audit: BPF prog-id=29 op=LOAD Jan 28 01:18:20.964628 kernel: audit: type=1334 audit(1769563100.946:153): prog-id=29 op=LOAD Jan 28 01:18:21.016000 audit: BPF prog-id=30 op=LOAD Jan 28 01:18:21.048767 kernel: audit: type=1334 audit(1769563101.016:154): prog-id=30 op=LOAD Jan 28 01:18:21.016000 audit: BPF prog-id=15 op=UNLOAD Jan 28 01:18:21.067973 kernel: audit: type=1334 audit(1769563101.016:155): prog-id=15 op=UNLOAD Jan 28 01:18:21.052321 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jan 28 01:18:21.016000 audit: BPF prog-id=31 op=LOAD Jan 28 01:18:21.016000 audit: BPF prog-id=32 op=LOAD Jan 28 01:18:21.016000 audit: BPF prog-id=16 op=UNLOAD Jan 28 01:18:21.016000 audit: BPF prog-id=17 op=UNLOAD Jan 28 01:18:21.035000 audit: BPF prog-id=33 op=LOAD Jan 28 01:18:21.035000 audit: BPF prog-id=21 op=UNLOAD Jan 28 01:18:21.037000 audit: BPF prog-id=34 op=LOAD Jan 28 01:18:21.037000 audit: BPF prog-id=25 op=UNLOAD Jan 28 01:18:21.037000 audit: BPF prog-id=35 op=LOAD Jan 28 01:18:21.037000 audit: BPF prog-id=36 op=LOAD Jan 28 01:18:21.037000 audit: BPF prog-id=26 op=UNLOAD Jan 28 01:18:21.037000 audit: BPF prog-id=27 op=UNLOAD Jan 28 01:18:21.037000 audit: BPF prog-id=37 op=LOAD Jan 28 01:18:21.037000 audit: BPF prog-id=22 op=UNLOAD Jan 28 01:18:21.037000 audit: BPF prog-id=38 op=LOAD Jan 28 01:18:21.037000 audit: BPF prog-id=39 op=LOAD Jan 28 01:18:21.037000 audit: BPF prog-id=23 op=UNLOAD Jan 28 01:18:21.037000 audit: BPF prog-id=24 op=UNLOAD Jan 28 01:18:21.037000 audit: BPF prog-id=40 op=LOAD Jan 28 01:18:21.037000 audit: BPF prog-id=18 op=UNLOAD Jan 28 01:18:21.037000 audit: BPF prog-id=41 op=LOAD Jan 28 01:18:21.037000 audit: BPF prog-id=42 op=LOAD Jan 28 01:18:21.037000 audit: BPF prog-id=19 op=UNLOAD Jan 28 01:18:21.037000 audit: BPF prog-id=20 op=UNLOAD Jan 28 01:18:21.052432 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 01:18:21.054841 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:18:21.064485 systemd[1]: Reload requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:18:21.064504 systemd[1]: Reloading... Jan 28 01:18:21.065000 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Jan 28 01:18:21.065120 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. 
Jan 28 01:18:21.140131 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:18:21.140194 systemd-tmpfiles[1381]: Skipping /boot Jan 28 01:18:21.163103 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:18:21.163176 systemd-tmpfiles[1381]: Skipping /boot Jan 28 01:18:21.201070 systemd-udevd[1382]: Using default interface naming scheme 'v257'. Jan 28 01:18:21.346696 zram_generator::config[1414]: No configuration found. Jan 28 01:18:21.866639 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:18:22.029636 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 28 01:18:22.040702 kernel: ACPI: button: Power Button [PWRF] Jan 28 01:18:22.139776 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 01:18:22.140717 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:18:22.155683 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 28 01:18:22.156188 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 01:18:22.194632 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 01:18:22.222786 systemd[1]: Reloading finished in 1157 ms. Jan 28 01:18:22.279463 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:18:22.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:22.338000 audit: BPF prog-id=43 op=LOAD Jan 28 01:18:22.338000 audit: BPF prog-id=37 op=UNLOAD Jan 28 01:18:22.338000 audit: BPF prog-id=44 op=LOAD Jan 28 01:18:22.338000 audit: BPF prog-id=45 op=LOAD Jan 28 01:18:22.338000 audit: BPF prog-id=38 op=UNLOAD Jan 28 01:18:22.338000 audit: BPF prog-id=39 op=UNLOAD Jan 28 01:18:22.338000 audit: BPF prog-id=46 op=LOAD Jan 28 01:18:22.338000 audit: BPF prog-id=33 op=UNLOAD Jan 28 01:18:22.338000 audit: BPF prog-id=47 op=LOAD Jan 28 01:18:22.338000 audit: BPF prog-id=34 op=UNLOAD Jan 28 01:18:22.338000 audit: BPF prog-id=48 op=LOAD Jan 28 01:18:22.338000 audit: BPF prog-id=49 op=LOAD Jan 28 01:18:22.338000 audit: BPF prog-id=35 op=UNLOAD Jan 28 01:18:22.338000 audit: BPF prog-id=36 op=UNLOAD Jan 28 01:18:22.345000 audit: BPF prog-id=50 op=LOAD Jan 28 01:18:22.345000 audit: BPF prog-id=30 op=UNLOAD Jan 28 01:18:22.345000 audit: BPF prog-id=51 op=LOAD Jan 28 01:18:22.345000 audit: BPF prog-id=52 op=LOAD Jan 28 01:18:22.345000 audit: BPF prog-id=31 op=UNLOAD Jan 28 01:18:22.345000 audit: BPF prog-id=32 op=UNLOAD Jan 28 01:18:22.364000 audit: BPF prog-id=53 op=LOAD Jan 28 01:18:22.364000 audit: BPF prog-id=54 op=LOAD Jan 28 01:18:22.364000 audit: BPF prog-id=28 op=UNLOAD Jan 28 01:18:22.364000 audit: BPF prog-id=29 op=UNLOAD Jan 28 01:18:22.365000 audit: BPF prog-id=55 op=LOAD Jan 28 01:18:22.365000 audit: BPF prog-id=40 op=UNLOAD Jan 28 01:18:22.365000 audit: BPF prog-id=56 op=LOAD Jan 28 01:18:22.365000 audit: BPF prog-id=57 op=LOAD Jan 28 01:18:22.365000 audit: BPF prog-id=41 op=UNLOAD Jan 28 01:18:22.365000 audit: BPF prog-id=42 op=UNLOAD Jan 28 01:18:22.412915 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:18:22.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:22.564107 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:18:22.569314 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:18:22.585216 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:18:22.592239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:18:22.619906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:18:22.701645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:18:22.768893 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:18:22.798048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:18:22.802004 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:18:22.867883 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:18:23.029228 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:18:23.052320 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:18:23.096625 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:18:23.144000 audit: BPF prog-id=58 op=LOAD Jan 28 01:18:23.168280 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:18:23.269482 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 28 01:18:23.292205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:18:23.341941 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:18:23.583248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:18:23.589340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:18:23.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:23.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:23.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:23.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:23.632229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:18:23.634877 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:18:23.649648 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:18:23.658004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 28 01:18:23.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:23.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:23.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:23.741178 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:18:23.752156 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:18:23.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:23.950064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:18:23.951818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:18:23.980482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:18:23.991000 audit[1504]: SYSTEM_BOOT pid=1504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.149519 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 28 01:18:24.194679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:18:24.213509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:18:24.223205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:18:24.223664 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:18:24.223821 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:18:24.224322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:18:24.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.238954 systemd[1]: Finished ensure-sysext.service. Jan 28 01:18:24.260160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:18:24.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.267114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 28 01:18:24.333109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:18:24.339806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:18:24.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.363358 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:18:24.370202 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:18:24.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.426914 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:18:24.429195 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:18:24.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:24.458813 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:18:24.462081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:18:24.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.510819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:18:24.511110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:18:24.526000 audit: BPF prog-id=59 op=LOAD Jan 28 01:18:24.534028 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 01:18:24.606174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:18:24.643870 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:18:24.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:24.782706 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:18:24.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:24.884898 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:18:24.988000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 01:18:24.988000 audit[1553]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3c39a480 a2=420 a3=0 items=0 ppid=1495 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:24.988000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:18:24.996770 augenrules[1553]: No rules Jan 28 01:18:24.997883 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:18:24.998378 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:18:25.028990 systemd-networkd[1503]: lo: Link UP Jan 28 01:18:25.029004 systemd-networkd[1503]: lo: Gained carrier Jan 28 01:18:25.037769 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:18:25.038121 systemd[1]: Reached target network.target - Network. Jan 28 01:18:25.044891 systemd-networkd[1503]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:18:25.044898 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:18:25.058925 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 01:18:25.072845 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 28 01:18:25.077747 systemd-networkd[1503]: eth0: Link UP Jan 28 01:18:25.078749 systemd-networkd[1503]: eth0: Gained carrier Jan 28 01:18:25.078778 systemd-networkd[1503]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:18:25.203973 systemd-networkd[1503]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:18:25.389375 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 01:18:25.531240 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 01:18:25.557603 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:18:27.532751 systemd-resolved[1289]: Clock change detected. Flushing caches. Jan 28 01:18:27.629737 systemd-timesyncd[1541]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 01:18:27.633806 systemd-timesyncd[1541]: Initial clock synchronization to Wed 2026-01-28 01:18:27.532348 UTC. Jan 28 01:18:27.705271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:18:28.477720 systemd-networkd[1503]: eth0: Gained IPv6LL Jan 28 01:18:28.529317 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:18:28.596180 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:18:31.066710 kernel: kvm_amd: TSC scaling supported Jan 28 01:18:31.069846 kernel: kvm_amd: Nested Virtualization enabled Jan 28 01:18:31.071155 kernel: kvm_amd: Nested Paging enabled Jan 28 01:18:31.071237 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 01:18:31.078149 kernel: kvm_amd: PMU virtualization is disabled Jan 28 01:18:31.294414 ldconfig[1500]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:18:31.318606 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 28 01:18:31.350376 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:18:31.572570 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:18:31.612670 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:18:31.630302 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:18:31.649390 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:18:31.669647 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 01:18:31.694787 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:18:31.716653 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:18:31.742187 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 01:18:31.766572 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 28 01:18:31.983733 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:18:32.127412 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:18:32.150373 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:18:32.234660 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:18:32.582665 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:18:32.601709 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:18:32.630186 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 01:18:32.669941 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jan 28 01:18:32.682305 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 01:18:32.788415 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:18:32.805474 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 01:18:32.820700 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:18:32.830591 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:18:32.840582 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:18:32.929559 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:18:32.965493 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:18:33.149664 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:18:33.188101 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 01:18:33.236451 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:18:33.316426 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:18:33.393725 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:18:33.420274 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:18:33.434217 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:18:33.454449 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 01:18:33.512380 kernel: EDAC MC: Ver: 3.0.0 Jan 28 01:18:33.497092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:33.512702 jq[1575]: false Jan 28 01:18:33.514731 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 28 01:18:33.565749 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:18:33.582317 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:18:33.600861 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 01:18:33.619368 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:18:33.667485 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing passwd entry cache Jan 28 01:18:33.668753 oslogin_cache_refresh[1577]: Refreshing passwd entry cache Jan 28 01:18:33.678378 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:18:33.692352 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:18:33.704759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:18:33.750159 oslogin_cache_refresh[1577]: Failure getting users, quitting Jan 28 01:18:33.761656 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting users, quitting Jan 28 01:18:33.761656 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:18:33.761656 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing group entry cache Jan 28 01:18:33.759303 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:18:33.750331 oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 28 01:18:33.750629 oslogin_cache_refresh[1577]: Refreshing group entry cache Jan 28 01:18:33.788535 extend-filesystems[1576]: Found /dev/vda6 Jan 28 01:18:33.857347 extend-filesystems[1576]: Found /dev/vda9 Jan 28 01:18:33.857347 extend-filesystems[1576]: Checking size of /dev/vda9 Jan 28 01:18:33.827191 oslogin_cache_refresh[1577]: Failure getting groups, quitting Jan 28 01:18:33.956228 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting groups, quitting Jan 28 01:18:33.956228 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:18:33.898536 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:18:33.956542 extend-filesystems[1576]: Resized partition /dev/vda9 Jan 28 01:18:34.007250 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 28 01:18:33.827211 oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:18:34.007965 extend-filesystems[1611]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 01:18:34.027621 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 01:18:34.034201 jq[1605]: true Jan 28 01:18:34.067677 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:18:34.070254 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:18:34.223300 update_engine[1597]: I20260128 01:18:34.146579 1597 main.cc:92] Flatcar Update Engine starting Jan 28 01:18:34.071175 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 01:18:34.071676 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 01:18:34.104241 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:18:34.107747 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 28 01:18:34.138438 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:18:34.231607 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:18:34.232340 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:18:34.306714 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 28 01:18:34.420798 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 01:18:34.421468 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 01:18:34.425284 extend-filesystems[1611]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 01:18:34.425284 extend-filesystems[1611]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 01:18:34.425284 extend-filesystems[1611]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 28 01:18:34.468702 extend-filesystems[1576]: Resized filesystem in /dev/vda9 Jan 28 01:18:34.431575 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:18:34.475795 tar[1621]: linux-amd64/LICENSE Jan 28 01:18:34.475795 tar[1621]: linux-amd64/helm Jan 28 01:18:34.436435 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:18:34.436855 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:18:34.518557 jq[1623]: true Jan 28 01:18:34.993362 dbus-daemon[1573]: [system] SELinux support is enabled Jan 28 01:18:34.994266 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:18:35.007161 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:18:35.007203 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 28 01:18:35.016381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:18:35.016416 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:18:35.096456 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:18:35.193228 update_engine[1597]: I20260128 01:18:35.103940 1597 update_check_scheduler.cc:74] Next update check in 2m17s Jan 28 01:18:35.162468 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:18:35.210504 systemd-logind[1590]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 01:18:35.210556 systemd-logind[1590]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:18:35.218338 systemd-logind[1590]: New seat seat0. Jan 28 01:18:35.224183 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:18:35.529199 bash[1661]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:18:35.555834 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:18:35.583855 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:18:35.675566 sshd_keygen[1612]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:18:36.595571 locksmithd[1654]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:18:36.597635 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:18:36.623880 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:18:36.756278 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:18:36.759729 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:18:36.868381 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 28 01:18:37.255103 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:18:37.290215 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:18:37.315468 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:18:37.372781 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:18:38.393370 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:18:38.445351 systemd[1]: Started sshd@0-10.0.0.61:22-10.0.0.1:49404.service - OpenSSH per-connection server daemon (10.0.0.1:49404). Jan 28 01:18:38.674889 containerd[1624]: time="2026-01-28T01:18:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 01:18:38.683705 containerd[1624]: time="2026-01-28T01:18:38.682551989Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 01:18:38.808438 containerd[1624]: time="2026-01-28T01:18:38.802958901Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="49.334µs" Jan 28 01:18:38.813477 containerd[1624]: time="2026-01-28T01:18:38.812624817Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 01:18:38.814207 containerd[1624]: time="2026-01-28T01:18:38.813975017Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 01:18:38.814383 containerd[1624]: time="2026-01-28T01:18:38.814359726Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 01:18:38.816444 containerd[1624]: time="2026-01-28T01:18:38.816418499Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 01:18:38.816530 containerd[1624]: 
time="2026-01-28T01:18:38.816510862Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:18:38.816772 containerd[1624]: time="2026-01-28T01:18:38.816745269Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:18:38.816849 containerd[1624]: time="2026-01-28T01:18:38.816829887Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:18:38.818453 containerd[1624]: time="2026-01-28T01:18:38.818426708Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:18:38.818614 containerd[1624]: time="2026-01-28T01:18:38.818593289Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:18:38.818789 containerd[1624]: time="2026-01-28T01:18:38.818763467Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:18:38.818854 containerd[1624]: time="2026-01-28T01:18:38.818839309Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:18:38.821423 containerd[1624]: time="2026-01-28T01:18:38.821397244Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:18:38.821573 containerd[1624]: time="2026-01-28T01:18:38.821554799Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 01:18:38.822175 containerd[1624]: 
time="2026-01-28T01:18:38.821977107Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 01:18:38.822824 containerd[1624]: time="2026-01-28T01:18:38.822800233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:18:38.823387 containerd[1624]: time="2026-01-28T01:18:38.823363675Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:18:38.823636 containerd[1624]: time="2026-01-28T01:18:38.823514206Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 01:18:38.823992 containerd[1624]: time="2026-01-28T01:18:38.823968204Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 01:18:38.837539 containerd[1624]: time="2026-01-28T01:18:38.836683192Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 01:18:38.838687 containerd[1624]: time="2026-01-28T01:18:38.838658260Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:18:38.902630 containerd[1624]: time="2026-01-28T01:18:38.902566051Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.904618934Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.904766409Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:18:38.907162 containerd[1624]: 
time="2026-01-28T01:18:38.904799602Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.904825149Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.904957837Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.904982533Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.905100793Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.905124678Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.905143343Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.905170283Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.905186273Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.905201592Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 01:18:38.907162 containerd[1624]: time="2026-01-28T01:18:38.905220508Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 01:18:38.907834 containerd[1624]: 
time="2026-01-28T01:18:38.905415461Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 01:18:38.907834 containerd[1624]: time="2026-01-28T01:18:38.905667111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910299659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910346537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910367496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910405558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910427689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910509863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910535740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910550799Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910567540Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 01:18:38.911303 containerd[1624]: time="2026-01-28T01:18:38.910606713Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 01:18:38.914289 containerd[1624]: time="2026-01-28T01:18:38.914255305Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 01:18:38.917733 containerd[1624]: time="2026-01-28T01:18:38.915341392Z" level=info msg="Start snapshots syncer" Jan 28 01:18:38.917733 containerd[1624]: time="2026-01-28T01:18:38.915447881Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 01:18:38.924421 containerd[1624]: time="2026-01-28T01:18:38.923667186Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnp
rivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 01:18:38.927498 containerd[1624]: time="2026-01-28T01:18:38.926789716Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 01:18:38.927739 containerd[1624]: time="2026-01-28T01:18:38.927711997Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 01:18:38.928370 containerd[1624]: time="2026-01-28T01:18:38.928344559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 01:18:38.929049 containerd[1624]: time="2026-01-28T01:18:38.928658254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 01:18:38.934390 containerd[1624]: time="2026-01-28T01:18:38.929704247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 01:18:38.934390 containerd[1624]: time="2026-01-28T01:18:38.933593561Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 01:18:38.934390 containerd[1624]: time="2026-01-28T01:18:38.933617907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 01:18:38.934390 containerd[1624]: time="2026-01-28T01:18:38.933635269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 01:18:38.945277 containerd[1624]: time="2026-01-28T01:18:38.945235803Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 01:18:38.945345 containerd[1624]: time="2026-01-28T01:18:38.945286037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 01:18:38.945345 containerd[1624]: time="2026-01-28T01:18:38.945306275Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 01:18:38.945418 containerd[1624]: time="2026-01-28T01:18:38.945382597Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:18:38.945450 containerd[1624]: time="2026-01-28T01:18:38.945413155Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:18:38.945450 containerd[1624]: time="2026-01-28T01:18:38.945430647Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:18:38.945450 containerd[1624]: time="2026-01-28T01:18:38.945444763Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:18:38.945556 containerd[1624]: time="2026-01-28T01:18:38.945455273Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 01:18:38.945556 containerd[1624]: time="2026-01-28T01:18:38.945468317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 01:18:38.945609 containerd[1624]: time="2026-01-28T01:18:38.945557965Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 01:18:38.945609 containerd[1624]: time="2026-01-28T01:18:38.945583492Z" level=info msg="runtime interface created" Jan 28 01:18:38.945609 containerd[1624]: 
time="2026-01-28T01:18:38.945590826Z" level=info msg="created NRI interface" Jan 28 01:18:38.945609 containerd[1624]: time="2026-01-28T01:18:38.945607306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 01:18:38.945724 containerd[1624]: time="2026-01-28T01:18:38.945631732Z" level=info msg="Connect containerd service" Jan 28 01:18:38.949221 containerd[1624]: time="2026-01-28T01:18:38.945781782Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:18:38.951774 containerd[1624]: time="2026-01-28T01:18:38.951441218Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:18:40.078769 tar[1621]: linux-amd64/README.md Jan 28 01:18:40.477110 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 49404 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:18:40.492295 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:40.659720 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:18:40.689858 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:18:40.779554 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:18:40.873468 systemd-logind[1590]: New session 1 of user core. Jan 28 01:18:41.131786 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:18:41.169761 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 28 01:18:41.463342 containerd[1624]: time="2026-01-28T01:18:41.461322706Z" level=info msg="Start subscribing containerd event" Jan 28 01:18:41.472178 containerd[1624]: time="2026-01-28T01:18:41.462658511Z" level=info msg="Start recovering state" Jan 28 01:18:41.472178 containerd[1624]: time="2026-01-28T01:18:41.470304900Z" level=info msg="Start event monitor" Jan 28 01:18:41.472178 containerd[1624]: time="2026-01-28T01:18:41.470506367Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:18:41.472178 containerd[1624]: time="2026-01-28T01:18:41.470529079Z" level=info msg="Start streaming server" Jan 28 01:18:41.472178 containerd[1624]: time="2026-01-28T01:18:41.470546572Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 01:18:41.472178 containerd[1624]: time="2026-01-28T01:18:41.470801788Z" level=info msg="runtime interface starting up..." Jan 28 01:18:41.472178 containerd[1624]: time="2026-01-28T01:18:41.470819461Z" level=info msg="starting plugins..." Jan 28 01:18:41.472178 containerd[1624]: time="2026-01-28T01:18:41.470844518Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 01:18:41.476285 containerd[1624]: time="2026-01-28T01:18:41.474157929Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:18:41.476285 containerd[1624]: time="2026-01-28T01:18:41.474308701Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:18:41.510187 containerd[1624]: time="2026-01-28T01:18:41.491961878Z" level=info msg="containerd successfully booted in 2.820932s" Jan 28 01:18:41.495509 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:18:41.555829 (systemd)[1714]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:41.584868 systemd-logind[1590]: New session 2 of user core. Jan 28 01:18:46.732552 systemd[1714]: Queued start job for default target default.target. 
Jan 28 01:18:46.784762 systemd[1714]: Created slice app.slice - User Application Slice. Jan 28 01:18:46.786221 systemd[1714]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 01:18:46.789123 systemd[1714]: Reached target paths.target - Paths. Jan 28 01:18:46.789219 systemd[1714]: Reached target timers.target - Timers. Jan 28 01:18:46.807331 systemd[1714]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:18:46.810355 systemd[1714]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 01:18:47.890842 systemd[1714]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 01:18:47.922568 systemd[1714]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:18:47.924770 systemd[1714]: Reached target sockets.target - Sockets. Jan 28 01:18:47.926728 systemd[1714]: Reached target basic.target - Basic System. Jan 28 01:18:47.926821 systemd[1714]: Reached target default.target - Main User Target. Jan 28 01:18:47.926872 systemd[1714]: Startup finished in 6.141s. Jan 28 01:18:47.931852 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:18:47.964789 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:18:49.055816 systemd[1]: Started sshd@1-10.0.0.61:22-10.0.0.1:53474.service - OpenSSH per-connection server daemon (10.0.0.1:53474). Jan 28 01:18:50.474247 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 53474 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:18:50.483228 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:50.550203 systemd-logind[1590]: New session 3 of user core. Jan 28 01:18:50.574301 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 28 01:18:50.979709 sshd[1736]: Connection closed by 10.0.0.1 port 53474 Jan 28 01:18:50.981454 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:51.006889 systemd[1]: sshd@1-10.0.0.61:22-10.0.0.1:53474.service: Deactivated successfully. Jan 28 01:18:51.026701 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:18:51.029532 systemd-logind[1590]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:18:51.035419 systemd[1]: Started sshd@2-10.0.0.61:22-10.0.0.1:53484.service - OpenSSH per-connection server daemon (10.0.0.1:53484). Jan 28 01:18:51.051523 systemd-logind[1590]: Removed session 3. Jan 28 01:18:51.548748 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 53484 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:18:51.558337 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:51.605791 systemd-logind[1590]: New session 4 of user core. Jan 28 01:18:51.628826 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:18:52.431795 sshd[1746]: Connection closed by 10.0.0.1 port 53484 Jan 28 01:18:52.434374 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:52.487409 systemd[1]: sshd@2-10.0.0.61:22-10.0.0.1:53484.service: Deactivated successfully. Jan 28 01:18:52.500161 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:18:52.513869 systemd-logind[1590]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:18:52.520146 systemd-logind[1590]: Removed session 4. Jan 28 01:18:55.074640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:55.212830 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:18:55.235592 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 28 01:18:55.262756 systemd[1]: Startup finished in 24.441s (kernel) + 43.029s (initrd) + 43.976s (userspace) = 1min 51.447s. Jan 28 01:19:02.654513 systemd[1]: Started sshd@3-10.0.0.61:22-10.0.0.1:34464.service - OpenSSH per-connection server daemon (10.0.0.1:34464). Jan 28 01:19:03.228687 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 34464 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:19:03.244292 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:03.299269 systemd-logind[1590]: New session 5 of user core. Jan 28 01:19:03.329548 kubelet[1754]: E0128 01:19:03.328513 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:19:03.332445 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:19:03.347512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:19:03.347813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:19:03.348940 systemd[1]: kubelet.service: Consumed 9.660s CPU time, 271.5M memory peak. Jan 28 01:19:03.580712 sshd[1770]: Connection closed by 10.0.0.1 port 34464 Jan 28 01:19:03.584613 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:03.639419 systemd[1]: Started sshd@4-10.0.0.61:22-10.0.0.1:34476.service - OpenSSH per-connection server daemon (10.0.0.1:34476). Jan 28 01:19:03.640683 systemd[1]: sshd@3-10.0.0.61:22-10.0.0.1:34464.service: Deactivated successfully. Jan 28 01:19:03.656835 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:19:03.661638 systemd-logind[1590]: Session 5 logged out. Waiting for processes to exit. 
Jan 28 01:19:03.682500 systemd-logind[1590]: Removed session 5. Jan 28 01:19:04.178567 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 34476 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:19:04.180355 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:04.208633 systemd-logind[1590]: New session 6 of user core. Jan 28 01:19:04.241371 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:19:04.315974 sshd[1780]: Connection closed by 10.0.0.1 port 34476 Jan 28 01:19:04.321152 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:04.356371 systemd[1]: sshd@4-10.0.0.61:22-10.0.0.1:34476.service: Deactivated successfully. Jan 28 01:19:04.365677 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:19:04.375762 systemd-logind[1590]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:19:04.399351 systemd[1]: Started sshd@5-10.0.0.61:22-10.0.0.1:34480.service - OpenSSH per-connection server daemon (10.0.0.1:34480). Jan 28 01:19:04.400747 systemd-logind[1590]: Removed session 6. Jan 28 01:19:05.029675 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 34480 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:19:06.266424 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:06.623360 systemd-logind[1590]: New session 7 of user core. Jan 28 01:19:06.689767 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:19:09.096898 sshd[1790]: Connection closed by 10.0.0.1 port 34480 Jan 28 01:19:09.108130 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:09.160721 systemd[1]: sshd@5-10.0.0.61:22-10.0.0.1:34480.service: Deactivated successfully. Jan 28 01:19:09.175916 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:19:09.187170 systemd-logind[1590]: Session 7 logged out. 
Waiting for processes to exit. Jan 28 01:19:09.205589 systemd[1]: Started sshd@6-10.0.0.61:22-10.0.0.1:34488.service - OpenSSH per-connection server daemon (10.0.0.1:34488). Jan 28 01:19:09.210802 systemd-logind[1590]: Removed session 7. Jan 28 01:19:09.773977 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 34488 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:19:09.786871 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:09.894261 systemd-logind[1590]: New session 8 of user core. Jan 28 01:19:09.971515 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 01:19:10.522642 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:19:10.523560 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:19:10.629865 sudo[1801]: pam_unix(sudo:session): session closed for user root Jan 28 01:19:10.655132 sshd[1800]: Connection closed by 10.0.0.1 port 34488 Jan 28 01:19:10.658973 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:10.710561 systemd[1]: Started sshd@7-10.0.0.61:22-10.0.0.1:34498.service - OpenSSH per-connection server daemon (10.0.0.1:34498). Jan 28 01:19:10.717951 systemd[1]: sshd@6-10.0.0.61:22-10.0.0.1:34488.service: Deactivated successfully. Jan 28 01:19:10.737802 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:19:10.772415 systemd-logind[1590]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:19:10.788413 systemd-logind[1590]: Removed session 8. Jan 28 01:19:11.077256 sshd[1805]: Accepted publickey for core from 10.0.0.1 port 34498 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:19:11.083621 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:11.153514 systemd-logind[1590]: New session 9 of user core. 
Jan 28 01:19:11.171746 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 01:19:11.302811 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:19:11.309904 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:19:11.356518 sudo[1814]: pam_unix(sudo:session): session closed for user root Jan 28 01:19:11.417630 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 01:19:11.419810 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:19:11.505736 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:19:12.101000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:19:12.108853 augenrules[1838]: No rules Jan 28 01:19:12.119759 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:19:12.120462 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 28 01:19:12.126153 kernel: kauditd_printk_skb: 83 callbacks suppressed Jan 28 01:19:12.126228 kernel: audit: type=1305 audit(1769563152.101:237): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:19:12.130748 sudo[1813]: pam_unix(sudo:session): session closed for user root Jan 28 01:19:12.147158 sshd[1812]: Connection closed by 10.0.0.1 port 34498 Jan 28 01:19:12.101000 audit[1838]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff92fd4850 a2=420 a3=0 items=0 ppid=1819 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:12.152489 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:12.179969 kernel: audit: type=1300 audit(1769563152.101:237): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff92fd4850 a2=420 a3=0 items=0 ppid=1819 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:12.180233 kernel: audit: type=1327 audit(1769563152.101:237): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:19:12.101000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:19:12.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:19:12.233215 kernel: audit: type=1130 audit(1769563152.121:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.248502 kernel: audit: type=1131 audit(1769563152.121:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.248708 kernel: audit: type=1106 audit(1769563152.129:240): pid=1813 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.129000 audit[1813]: USER_END pid=1813 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.130000 audit[1813]: CRED_DISP pid=1813 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.155000 audit[1805]: USER_END pid=1805 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:19:12.279510 kernel: audit: type=1104 audit(1769563152.130:241): pid=1813 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:19:12.279582 kernel: audit: type=1106 audit(1769563152.155:242): pid=1805 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:19:12.282832 systemd[1]: sshd@7-10.0.0.61:22-10.0.0.1:34498.service: Deactivated successfully. Jan 28 01:19:12.285683 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:19:12.289530 systemd-logind[1590]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:19:12.300220 systemd[1]: Started sshd@8-10.0.0.61:22-10.0.0.1:34506.service - OpenSSH per-connection server daemon (10.0.0.1:34506). Jan 28 01:19:12.307135 systemd-logind[1590]: Removed session 9. Jan 28 01:19:12.155000 audit[1805]: CRED_DISP pid=1805 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:19:12.325152 kernel: audit: type=1104 audit(1769563152.155:243): pid=1805 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:19:12.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.61:22-10.0.0.1:34498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.388612 kernel: audit: type=1131 audit(1769563152.282:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.61:22-10.0.0.1:34498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:19:12.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.61:22-10.0.0.1:34506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.605000 audit[1847]: USER_ACCT pid=1847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:19:12.649292 sshd[1847]: Accepted publickey for core from 10.0.0.1 port 34506 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:19:12.664720 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:12.661000 audit[1847]: CRED_ACQ pid=1847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:19:12.661000 audit[1847]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff05b9a4d0 a2=3 a3=0 items=0 ppid=1 pid=1847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:12.661000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:19:12.726319 systemd-logind[1590]: New session 10 of user core. Jan 28 01:19:12.750608 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 28 01:19:12.772000 audit[1847]: USER_START pid=1847 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:19:12.787000 audit[1851]: CRED_ACQ pid=1851 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:19:12.858000 audit[1852]: USER_ACCT pid=1852 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.859981 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:19:12.859000 audit[1852]: CRED_REFR pid=1852 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.872000 audit[1852]: USER_START pid=1852 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:19:12.874506 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:19:13.362145 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:19:13.390708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:19:18.673873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:19:18.694166 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 28 01:19:18.694773 kernel: audit: type=1130 audit(1769563158.673:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:18.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:18.970179 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:19:20.839655 update_engine[1597]: I20260128 01:19:20.829383 1597 update_attempter.cc:509] Updating boot flags... Jan 28 01:19:20.914540 kubelet[1880]: E0128 01:19:20.906114 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:19:20.991403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:19:21.008218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:19:21.021553 systemd[1]: kubelet.service: Consumed 3.010s CPU time, 111.1M memory peak. Jan 28 01:19:21.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:19:21.155276 kernel: audit: type=1131 audit(1769563161.017:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 28 01:19:22.170374 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:19:22.253880 (dockerd)[1904]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:19:28.317328 dockerd[1904]: time="2026-01-28T01:19:28.314970518Z" level=info msg="Starting up" Jan 28 01:19:28.323310 dockerd[1904]: time="2026-01-28T01:19:28.323144946Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 01:19:28.783124 dockerd[1904]: time="2026-01-28T01:19:28.782574862Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 01:19:29.457434 dockerd[1904]: time="2026-01-28T01:19:29.455399738Z" level=info msg="Loading containers: start." Jan 28 01:19:29.579803 kernel: Initializing XFRM netlink socket Jan 28 01:19:30.149000 audit[1959]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.149000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe035a2b90 a2=0 a3=0 items=0 ppid=1904 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.172375 kernel: audit: type=1325 audit(1769563170.149:256): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.172785 kernel: audit: type=1300 audit(1769563170.149:256): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe035a2b90 a2=0 a3=0 items=0 ppid=1904 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.246370 kernel: audit: type=1327 audit(1769563170.149:256): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:19:30.149000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:19:30.167000 audit[1961]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.265063 kernel: audit: type=1325 audit(1769563170.167:257): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.167000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffc6e6dd50 a2=0 a3=0 items=0 ppid=1904 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.311624 kernel: audit: type=1300 audit(1769563170.167:257): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffc6e6dd50 a2=0 a3=0 items=0 ppid=1904 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.320244 kernel: audit: type=1327 audit(1769563170.167:257): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:19:30.321178 kernel: audit: type=1325 audit(1769563170.182:258): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:19:30.182000 audit[1963]: NETFILTER_CFG table=filter:4 family=2 
entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.182000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde3c0cc30 a2=0 a3=0 items=0 ppid=1904 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.349891 kernel: audit: type=1300 audit(1769563170.182:258): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde3c0cc30 a2=0 a3=0 items=0 ppid=1904 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.352954 kernel: audit: type=1327 audit(1769563170.182:258): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 01:19:30.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 01:19:30.364583 kernel: audit: type=1325 audit(1769563170.196:259): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.196000 audit[1965]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.196000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc53aab2e0 a2=0 a3=0 items=0 ppid=1904 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.196000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 01:19:30.209000 audit[1967]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.209000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff510d8a70 a2=0 a3=0 items=0 ppid=1904 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 01:19:30.293000 audit[1969]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.293000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeee23fd20 a2=0 a3=0 items=0 ppid=1904 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.293000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:19:30.353000 audit[1971]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.353000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc8fc212c0 a2=0 a3=0 items=0 ppid=1904 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.353000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:19:30.368000 audit[1973]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.368000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffae8540e0 a2=0 a3=0 items=0 ppid=1904 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.368000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 01:19:30.507000 audit[1976]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.507000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffec9ab3830 a2=0 a3=0 items=0 ppid=1904 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.507000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 28 01:19:30.560000 audit[1978]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.560000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd3c1dc2d0 a2=0 a3=0 items=0 ppid=1904 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.560000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 01:19:30.578000 audit[1980]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.578000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffeadb9490 a2=0 a3=0 items=0 ppid=1904 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.578000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 01:19:30.592000 audit[1982]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.592000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffcbc868300 a2=0 a3=0 items=0 ppid=1904 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:19:30.606000 audit[1984]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:30.606000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcdce132d0 a2=0 a3=0 items=0 ppid=1904 pid=1984 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.606000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 01:19:31.033000 audit[2014]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.033000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffc518b0d0 a2=0 a3=0 items=0 ppid=1904 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:19:31.059000 audit[2016]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.059000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffef1fe0830 a2=0 a3=0 items=0 ppid=1904 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.059000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:19:31.074000 audit[2018]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.074000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe72c489d0 a2=0 a3=0 items=0 ppid=1904 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.074000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 01:19:31.140834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:19:31.144000 audit[2020]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.144000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbaa5f870 a2=0 a3=0 items=0 ppid=1904 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 01:19:31.161000 audit[2023]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.161000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3f478060 a2=0 a3=0 items=0 ppid=1904 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 01:19:31.157966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 01:19:31.184000 audit[2025]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.184000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc4340c920 a2=0 a3=0 items=0 ppid=1904 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:19:31.196000 audit[2027]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.196000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd498cebb0 a2=0 a3=0 items=0 ppid=1904 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:19:31.199000 audit[2029]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.199000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffee2b22e70 a2=0 a3=0 items=0 ppid=1904 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.199000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 01:19:31.259000 audit[2031]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.259000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdbafc8e40 a2=0 a3=0 items=0 ppid=1904 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.259000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 28 01:19:31.285000 audit[2035]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.285000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff9554de70 a2=0 a3=0 items=0 ppid=1904 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 01:19:31.302000 audit[2037]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.302000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff01778850 a2=0 a3=0 items=0 ppid=1904 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.302000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 01:19:31.355000 audit[2039]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.355000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffcf3a62aa0 a2=0 a3=0 items=0 ppid=1904 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:19:31.388000 audit[2041]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.388000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd6d84eee0 a2=0 a3=0 items=0 ppid=1904 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.388000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 01:19:31.430000 audit[2046]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:31.430000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc6d703610 a2=0 a3=0 items=0 ppid=1904 pid=2046 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.430000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 01:19:31.474000 audit[2048]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:31.474000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff7c298030 a2=0 a3=0 items=0 ppid=1904 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.474000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 01:19:31.550000 audit[2050]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:31.550000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffebc06c350 a2=0 a3=0 items=0 ppid=1904 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.550000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 01:19:31.592000 audit[2052]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.592000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef74f3640 a2=0 a3=0 items=0 ppid=1904 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 01:19:31.694000 audit[2054]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.694000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe757507b0 a2=0 a3=0 items=0 ppid=1904 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 01:19:31.704000 audit[2056]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:19:31.704000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffddc44b3b0 a2=0 a3=0 items=0 ppid=1904 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:31.704000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 01:19:31.988069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:19:31.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:19:32.019392 (kubelet)[2061]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:19:32.049000 audit[2067]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:32.049000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff07b95c00 a2=0 a3=0 items=0 ppid=1904 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:32.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 01:19:32.068000 audit[2074]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:32.068000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe09ce2340 a2=0 a3=0 items=0 ppid=1904 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:32.068000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 01:19:32.131000 audit[2083]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:32.131000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffdd77c25f0 a2=0 a3=0 items=0 ppid=1904 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:32.131000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 01:19:32.204000 audit[2089]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:32.204000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffddb7238b0 a2=0 a3=0 items=0 ppid=1904 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:32.204000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 01:19:32.216000 audit[2091]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:32.216000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff92a3dac0 a2=0 a3=0 items=0 ppid=1904 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:32.216000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 01:19:32.222200 kubelet[2061]: E0128 01:19:32.221112 2061 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet 
config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:19:32.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:19:32.232616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:19:32.232949 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:19:32.233609 systemd[1]: kubelet.service: Consumed 563ms CPU time, 110.3M memory peak. Jan 28 01:19:32.238000 audit[2093]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:32.238000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdafd4cd60 a2=0 a3=0 items=0 ppid=1904 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:32.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 01:19:32.274000 audit[2096]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:32.274000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffca00ceb30 a2=0 a3=0 items=0 ppid=1904 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:32.274000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:19:32.340000 audit[2098]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:19:32.340000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe06854d60 a2=0 a3=0 items=0 ppid=1904 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:32.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 01:19:32.410051 systemd-networkd[1503]: docker0: Link UP Jan 28 01:19:32.467434 dockerd[1904]: time="2026-01-28T01:19:32.465315355Z" level=info msg="Loading containers: done." Jan 28 01:19:33.884978 dockerd[1904]: time="2026-01-28T01:19:33.881495212Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:19:34.000188 dockerd[1904]: time="2026-01-28T01:19:33.886401928Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 01:19:34.000188 dockerd[1904]: time="2026-01-28T01:19:33.886548056Z" level=info msg="Initializing buildkit" Jan 28 01:19:33.905666 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3556496189-merged.mount: Deactivated successfully. 
Jan 28 01:19:35.699635 dockerd[1904]: time="2026-01-28T01:19:35.684583396Z" level=info msg="Completed buildkit initialization" Jan 28 01:19:36.175301 dockerd[1904]: time="2026-01-28T01:19:36.071707606Z" level=info msg="Daemon has completed initialization" Jan 28 01:19:36.182310 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:19:36.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:36.233223 kernel: kauditd_printk_skb: 112 callbacks suppressed Jan 28 01:19:36.253764 kernel: audit: type=1130 audit(1769563176.184:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:36.269264 dockerd[1904]: time="2026-01-28T01:19:36.260453077Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:19:42.378172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:19:42.398172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:19:42.531078 containerd[1624]: time="2026-01-28T01:19:42.530140823Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 28 01:19:45.470687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:19:45.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:45.501977 kernel: audit: type=1130 audit(1769563185.472:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:19:45.698470 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:19:46.705144 kubelet[2151]: E0128 01:19:46.704776 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:19:46.717875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:19:46.719587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:19:46.722341 systemd[1]: kubelet.service: Consumed 955ms CPU time, 108.7M memory peak. Jan 28 01:19:46.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:19:46.764977 kernel: audit: type=1131 audit(1769563186.721:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:19:48.305515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756533679.mount: Deactivated successfully. Jan 28 01:19:56.983672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:19:57.010886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:19:58.660763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:19:58.694622 kernel: audit: type=1130 audit(1769563198.660:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 28 01:19:58.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:58.774924 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:19:59.315749 kubelet[2228]: E0128 01:19:59.314648 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:19:59.322892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:19:59.323209 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:19:59.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:19:59.348572 systemd[1]: kubelet.service: Consumed 908ms CPU time, 110.3M memory peak. Jan 28 01:19:59.371865 kernel: audit: type=1131 audit(1769563199.341:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 01:20:01.592120 containerd[1624]: time="2026-01-28T01:20:01.590393382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:01.597979 containerd[1624]: time="2026-01-28T01:20:01.594118642Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30103808" Jan 28 01:20:01.602861 containerd[1624]: time="2026-01-28T01:20:01.602547484Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:01.614698 containerd[1624]: time="2026-01-28T01:20:01.608453261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:01.614698 containerd[1624]: time="2026-01-28T01:20:01.610313514Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 19.07949126s" Jan 28 01:20:01.614698 containerd[1624]: time="2026-01-28T01:20:01.610507927Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 28 01:20:01.626706 containerd[1624]: time="2026-01-28T01:20:01.625941746Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 28 01:20:09.458773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Jan 28 01:20:09.574181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:15.264058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:15.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:15.319690 kernel: audit: type=1130 audit(1769563215.269:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:15.328377 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:20:16.981868 kubelet[2247]: E0128 01:20:16.980633 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:20:17.012504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:20:17.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:20:17.012838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:20:17.014959 systemd[1]: kubelet.service: Consumed 1.298s CPU time, 110.4M memory peak. Jan 28 01:20:17.033189 kernel: audit: type=1131 audit(1769563217.013:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 28 01:20:19.427834 containerd[1624]: time="2026-01-28T01:20:19.427388897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:19.445167 containerd[1624]: time="2026-01-28T01:20:19.443561210Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 28 01:20:19.453748 containerd[1624]: time="2026-01-28T01:20:19.452799181Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:19.483364 containerd[1624]: time="2026-01-28T01:20:19.480395968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:19.485742 containerd[1624]: time="2026-01-28T01:20:19.485689197Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 17.859505923s" Jan 28 01:20:19.499666 containerd[1624]: time="2026-01-28T01:20:19.493426328Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 28 01:20:19.524847 containerd[1624]: time="2026-01-28T01:20:19.523717259Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 28 01:20:27.194107 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Jan 28 01:20:27.302535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:29.512497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:29.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:29.539602 kernel: audit: type=1130 audit(1769563229.514:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:29.557698 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:20:30.482531 kubelet[2269]: E0128 01:20:30.478543 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:20:30.494587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:20:30.495207 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:20:30.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:20:30.499608 systemd[1]: kubelet.service: Consumed 1.071s CPU time, 110.5M memory peak. Jan 28 01:20:30.527297 kernel: audit: type=1131 audit(1769563230.497:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 28 01:20:32.726814 containerd[1624]: time="2026-01-28T01:20:32.723429215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:32.750758 containerd[1624]: time="2026-01-28T01:20:32.729363380Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20151328" Jan 28 01:20:32.757694 containerd[1624]: time="2026-01-28T01:20:32.754125516Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:32.767789 containerd[1624]: time="2026-01-28T01:20:32.767190964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:32.780718 containerd[1624]: time="2026-01-28T01:20:32.774238973Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 13.250434352s" Jan 28 01:20:32.780718 containerd[1624]: time="2026-01-28T01:20:32.774286701Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 28 01:20:32.791742 containerd[1624]: time="2026-01-28T01:20:32.791508737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 28 01:20:40.467338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3672132601.mount: Deactivated successfully. 
Jan 28 01:20:40.617838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 28 01:20:40.639461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:42.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:42.394161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:42.425731 kernel: audit: type=1130 audit(1769563242.393:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:42.471210 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:20:43.527556 kubelet[2292]: E0128 01:20:43.524534 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:20:43.661173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:20:43.661584 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:20:43.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:20:43.669229 systemd[1]: kubelet.service: Consumed 1.334s CPU time, 108.3M memory peak. 
Jan 28 01:20:43.704154 kernel: audit: type=1131 audit(1769563243.668:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:20:48.832835 containerd[1624]: time="2026-01-28T01:20:48.829398474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:48.882476 containerd[1624]: time="2026-01-28T01:20:48.866289084Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31927016" Jan 28 01:20:48.882476 containerd[1624]: time="2026-01-28T01:20:48.873982294Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:48.895448 containerd[1624]: time="2026-01-28T01:20:48.886835362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:48.895448 containerd[1624]: time="2026-01-28T01:20:48.888244751Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 16.096632563s" Jan 28 01:20:48.895448 containerd[1624]: time="2026-01-28T01:20:48.889150988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 28 01:20:48.906341 containerd[1624]: time="2026-01-28T01:20:48.905832572Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 28 01:20:52.911776 update_engine[1597]: I20260128 01:20:52.879724 1597 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 01:20:52.911776 update_engine[1597]: I20260128 01:20:52.911756 1597 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 01:20:52.971909 update_engine[1597]: I20260128 01:20:52.965515 1597 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 01:20:52.982284 update_engine[1597]: I20260128 01:20:52.978567 1597 omaha_request_params.cc:62] Current group set to beta Jan 28 01:20:52.982284 update_engine[1597]: I20260128 01:20:52.979666 1597 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 01:20:52.982284 update_engine[1597]: I20260128 01:20:52.979688 1597 update_attempter.cc:643] Scheduling an action processor start. Jan 28 01:20:52.982284 update_engine[1597]: I20260128 01:20:52.979714 1597 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:20:52.982284 update_engine[1597]: I20260128 01:20:52.980297 1597 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 01:20:52.982284 update_engine[1597]: I20260128 01:20:52.981662 1597 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:20:52.982284 update_engine[1597]: I20260128 01:20:52.981685 1597 omaha_request_action.cc:272] Request: Jan 28 01:20:52.982284 update_engine[1597]: Jan 28 01:20:52.982284 update_engine[1597]: Jan 28 01:20:52.982284 update_engine[1597]: Jan 28 01:20:52.982284 update_engine[1597]: Jan 28 01:20:52.982284 update_engine[1597]: Jan 28 01:20:52.982284 update_engine[1597]: Jan 28 01:20:52.982284 update_engine[1597]: Jan 28 01:20:52.982284 update_engine[1597]: Jan 28 01:20:52.982284 update_engine[1597]: I20260128 01:20:52.981786 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:20:53.423959 
locksmithd[1654]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 01:20:53.470429 update_engine[1597]: I20260128 01:20:53.467417 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:20:53.668423 update_engine[1597]: I20260128 01:20:53.573292 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:20:53.697964 update_engine[1597]: E20260128 01:20:53.674192 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:20:53.697964 update_engine[1597]: I20260128 01:20:53.674729 1597 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 01:20:53.971484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 28 01:20:54.079912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:56.516897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834621436.mount: Deactivated successfully. Jan 28 01:20:57.224681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:57.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:57.289660 kernel: audit: type=1130 audit(1769563257.224:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:57.373920 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:20:58.323375 kubelet[2318]: E0128 01:20:58.306503 2318 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:20:58.388804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:20:58.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:20:58.415627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:20:58.426892 systemd[1]: kubelet.service: Consumed 1.524s CPU time, 109M memory peak. Jan 28 01:20:58.478913 kernel: audit: type=1131 audit(1769563258.421:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:21:03.821203 update_engine[1597]: I20260128 01:21:03.818475 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:21:03.828665 update_engine[1597]: I20260128 01:21:03.823784 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:21:03.828665 update_engine[1597]: I20260128 01:21:03.825222 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:21:03.850174 update_engine[1597]: E20260128 01:21:03.849418 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:21:03.850174 update_engine[1597]: I20260128 01:21:03.849589 1597 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 01:21:08.794258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 28 01:21:08.886582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:10.218528 containerd[1624]: time="2026-01-28T01:21:10.215917658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:10.233323 containerd[1624]: time="2026-01-28T01:21:10.221490466Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20931441" Jan 28 01:21:10.289539 containerd[1624]: time="2026-01-28T01:21:10.288550777Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:10.312973 containerd[1624]: time="2026-01-28T01:21:10.312157797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:10.317292 containerd[1624]: time="2026-01-28T01:21:10.317121906Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 21.410771501s" Jan 28 01:21:10.317292 containerd[1624]: 
time="2026-01-28T01:21:10.317248582Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 28 01:21:10.326699 containerd[1624]: time="2026-01-28T01:21:10.325981290Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:21:10.947898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:21:10.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:21:10.977429 kernel: audit: type=1130 audit(1769563270.950:311): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:21:11.014957 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:21:11.356668 kubelet[2380]: E0128 01:21:11.356355 2380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:21:11.378711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:21:11.379293 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:21:11.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 01:21:11.380195 systemd[1]: kubelet.service: Consumed 1.473s CPU time, 109.1M memory peak. Jan 28 01:21:11.399213 kernel: audit: type=1131 audit(1769563271.378:312): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:21:11.550904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547099635.mount: Deactivated successfully. Jan 28 01:21:11.630171 containerd[1624]: time="2026-01-28T01:21:11.628454959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:21:11.633992 containerd[1624]: time="2026-01-28T01:21:11.633639378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 01:21:11.641257 containerd[1624]: time="2026-01-28T01:21:11.639386865Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:21:11.646390 containerd[1624]: time="2026-01-28T01:21:11.646168592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:21:11.652903 containerd[1624]: time="2026-01-28T01:21:11.648626882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 
1.322467792s" Jan 28 01:21:11.652903 containerd[1624]: time="2026-01-28T01:21:11.648717381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 01:21:11.653719 containerd[1624]: time="2026-01-28T01:21:11.653499662Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 28 01:21:12.921670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815135932.mount: Deactivated successfully. Jan 28 01:21:13.826113 update_engine[1597]: I20260128 01:21:13.825709 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:21:13.826668 update_engine[1597]: I20260128 01:21:13.826256 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:21:13.827330 update_engine[1597]: I20260128 01:21:13.827229 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:21:13.853735 update_engine[1597]: E20260128 01:21:13.852721 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:21:13.853735 update_engine[1597]: I20260128 01:21:13.853626 1597 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 01:21:21.624497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 28 01:21:21.674861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:23.967633 update_engine[1597]: I20260128 01:21:23.869422 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:21:23.967633 update_engine[1597]: I20260128 01:21:23.993327 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:21:24.200661 update_engine[1597]: I20260128 01:21:24.194877 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:21:24.215129 update_engine[1597]: E20260128 01:21:24.214358 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:21:24.215129 update_engine[1597]: I20260128 01:21:24.214640 1597 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:21:24.215129 update_engine[1597]: I20260128 01:21:24.214660 1597 omaha_request_action.cc:617] Omaha request response: Jan 28 01:21:24.215512 update_engine[1597]: E20260128 01:21:24.215412 1597 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 28 01:21:24.220924 update_engine[1597]: I20260128 01:21:24.218631 1597 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 28 01:21:24.221847 update_engine[1597]: I20260128 01:21:24.221748 1597 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:21:24.221965 update_engine[1597]: I20260128 01:21:24.221938 1597 update_attempter.cc:306] Processing Done. Jan 28 01:21:24.222440 update_engine[1597]: E20260128 01:21:24.222409 1597 update_attempter.cc:619] Update failed. Jan 28 01:21:24.222601 update_engine[1597]: I20260128 01:21:24.222569 1597 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 01:21:24.222691 update_engine[1597]: I20260128 01:21:24.222666 1597 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 01:21:24.225298 update_engine[1597]: I20260128 01:21:24.222746 1597 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 28 01:21:24.274344 update_engine[1597]: I20260128 01:21:24.225488 1597 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:21:24.274344 update_engine[1597]: I20260128 01:21:24.226955 1597 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:21:24.274344 update_engine[1597]: I20260128 01:21:24.226974 1597 omaha_request_action.cc:272] Request: Jan 28 01:21:24.274344 update_engine[1597]: Jan 28 01:21:24.274344 update_engine[1597]: Jan 28 01:21:24.274344 update_engine[1597]: Jan 28 01:21:24.274344 update_engine[1597]: Jan 28 01:21:24.274344 update_engine[1597]: Jan 28 01:21:24.274344 update_engine[1597]: Jan 28 01:21:24.274344 update_engine[1597]: I20260128 01:21:24.226985 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:21:24.420482 update_engine[1597]: I20260128 01:21:24.323899 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:21:24.595595 locksmithd[1654]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 01:21:24.595595 locksmithd[1654]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 01:21:24.678627 update_engine[1597]: I20260128 01:21:24.482618 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:21:24.678627 update_engine[1597]: E20260128 01:21:24.514878 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:21:24.678627 update_engine[1597]: I20260128 01:21:24.515726 1597 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:21:24.678627 update_engine[1597]: I20260128 01:21:24.515755 1597 omaha_request_action.cc:617] Omaha request response: Jan 28 01:21:24.678627 update_engine[1597]: I20260128 01:21:24.524133 1597 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:21:24.678627 update_engine[1597]: I20260128 01:21:24.565135 1597 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:21:24.678627 update_engine[1597]: I20260128 01:21:24.565397 1597 update_attempter.cc:306] Processing Done. Jan 28 01:21:24.678627 update_engine[1597]: I20260128 01:21:24.565555 1597 update_attempter.cc:310] Error event sent. Jan 28 01:21:24.678627 update_engine[1597]: I20260128 01:21:24.565680 1597 update_check_scheduler.cc:74] Next update check in 43m56s Jan 28 01:21:27.376612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:21:27.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:21:27.416248 kernel: audit: type=1130 audit(1769563287.382:313): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:21:27.698832 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:21:29.376221 kubelet[2453]: E0128 01:21:29.373341 2453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:21:29.404619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:21:29.404986 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:21:29.407540 systemd[1]: kubelet.service: Consumed 2.295s CPU time, 108.4M memory peak. Jan 28 01:21:29.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:21:29.459283 kernel: audit: type=1131 audit(1769563289.406:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:21:39.636177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 28 01:21:39.661347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:40.268280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:21:40.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:21:40.295129 kernel: audit: type=1130 audit(1769563300.266:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:21:40.308866 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:21:40.763959 containerd[1624]: time="2026-01-28T01:21:40.760484903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:40.775435 containerd[1624]: time="2026-01-28T01:21:40.769117845Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58919065" Jan 28 01:21:40.775435 containerd[1624]: time="2026-01-28T01:21:40.773476015Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:41.083150 containerd[1624]: time="2026-01-28T01:21:41.066153632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:41.083150 containerd[1624]: time="2026-01-28T01:21:41.072840319Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 29.41919774s" Jan 28 01:21:41.083150 containerd[1624]: time="2026-01-28T01:21:41.074233123Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference 
\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 28 01:21:41.311944 kubelet[2473]: E0128 01:21:41.311343 2473 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:21:41.330105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:21:41.330422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:21:41.331512 systemd[1]: kubelet.service: Consumed 837ms CPU time, 110.2M memory peak. Jan 28 01:21:41.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:21:41.362212 kernel: audit: type=1131 audit(1769563301.330:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:21:51.420438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 28 01:21:51.562964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:55.376672 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:21:55.428467 kernel: audit: type=1130 audit(1769563315.379:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:21:55.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=failed' Jan 28 01:21:55.377711 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 01:21:55.379691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:21:55.430657 systemd[1]: kubelet.service: Consumed 1.026s CPU time, 98.6M memory peak. Jan 28 01:21:55.770263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:56.889079 systemd[1]: Reload requested from client PID 2519 ('systemctl') (unit session-10.scope)... Jan 28 01:21:56.889735 systemd[1]: Reloading... Jan 28 01:21:57.174936 zram_generator::config[2569]: No configuration found. Jan 28 01:22:01.323769 systemd[1]: Reloading finished in 4433 ms. Jan 28 01:22:01.407000 audit: BPF prog-id=63 op=LOAD Jan 28 01:22:01.414118 kernel: audit: type=1334 audit(1769563321.407:318): prog-id=63 op=LOAD Jan 28 01:22:01.407000 audit: BPF prog-id=50 op=UNLOAD Jan 28 01:22:01.407000 audit: BPF prog-id=64 op=LOAD Jan 28 01:22:01.421790 kernel: audit: type=1334 audit(1769563321.407:319): prog-id=50 op=UNLOAD Jan 28 01:22:01.421917 kernel: audit: type=1334 audit(1769563321.407:320): prog-id=64 op=LOAD Jan 28 01:22:01.421965 kernel: audit: type=1334 audit(1769563321.407:321): prog-id=65 op=LOAD Jan 28 01:22:01.407000 audit: BPF prog-id=65 op=LOAD Jan 28 01:22:01.427463 kernel: audit: type=1334 audit(1769563321.407:322): prog-id=51 op=UNLOAD Jan 28 01:22:01.407000 audit: BPF prog-id=51 op=UNLOAD Jan 28 01:22:01.438932 kernel: audit: type=1334 audit(1769563321.407:323): prog-id=52 op=UNLOAD Jan 28 01:22:01.407000 audit: BPF prog-id=52 op=UNLOAD Jan 28 01:22:01.461065 kernel: audit: type=1334 audit(1769563321.411:324): prog-id=66 op=LOAD Jan 28 01:22:01.411000 audit: BPF prog-id=66 op=LOAD Jan 28 01:22:01.463898 kernel: audit: type=1334 audit(1769563321.411:325): prog-id=46 op=UNLOAD Jan 28 01:22:01.411000 audit: BPF prog-id=46 op=UNLOAD Jan 28 01:22:01.467000 audit: BPF prog-id=67 op=LOAD Jan 28 01:22:01.479739 kernel: audit: 
type=1334 audit(1769563321.467:326): prog-id=67 op=LOAD Jan 28 01:22:01.479883 kernel: audit: type=1334 audit(1769563321.467:327): prog-id=47 op=UNLOAD Jan 28 01:22:01.467000 audit: BPF prog-id=47 op=UNLOAD Jan 28 01:22:01.468000 audit: BPF prog-id=68 op=LOAD Jan 28 01:22:01.468000 audit: BPF prog-id=69 op=LOAD Jan 28 01:22:01.468000 audit: BPF prog-id=48 op=UNLOAD Jan 28 01:22:01.468000 audit: BPF prog-id=49 op=UNLOAD Jan 28 01:22:01.472000 audit: BPF prog-id=70 op=LOAD Jan 28 01:22:01.473000 audit: BPF prog-id=55 op=UNLOAD Jan 28 01:22:01.473000 audit: BPF prog-id=71 op=LOAD Jan 28 01:22:01.473000 audit: BPF prog-id=72 op=LOAD Jan 28 01:22:01.473000 audit: BPF prog-id=56 op=UNLOAD Jan 28 01:22:01.473000 audit: BPF prog-id=57 op=UNLOAD Jan 28 01:22:01.475000 audit: BPF prog-id=73 op=LOAD Jan 28 01:22:01.475000 audit: BPF prog-id=58 op=UNLOAD Jan 28 01:22:01.476000 audit: BPF prog-id=74 op=LOAD Jan 28 01:22:01.476000 audit: BPF prog-id=59 op=UNLOAD Jan 28 01:22:01.481000 audit: BPF prog-id=75 op=LOAD Jan 28 01:22:01.481000 audit: BPF prog-id=43 op=UNLOAD Jan 28 01:22:01.481000 audit: BPF prog-id=76 op=LOAD Jan 28 01:22:01.481000 audit: BPF prog-id=77 op=LOAD Jan 28 01:22:01.481000 audit: BPF prog-id=44 op=UNLOAD Jan 28 01:22:01.481000 audit: BPF prog-id=45 op=UNLOAD Jan 28 01:22:01.485000 audit: BPF prog-id=78 op=LOAD Jan 28 01:22:01.486000 audit: BPF prog-id=60 op=UNLOAD Jan 28 01:22:01.486000 audit: BPF prog-id=79 op=LOAD Jan 28 01:22:01.486000 audit: BPF prog-id=80 op=LOAD Jan 28 01:22:01.486000 audit: BPF prog-id=61 op=UNLOAD Jan 28 01:22:01.486000 audit: BPF prog-id=62 op=UNLOAD Jan 28 01:22:01.486000 audit: BPF prog-id=81 op=LOAD Jan 28 01:22:01.487000 audit: BPF prog-id=82 op=LOAD Jan 28 01:22:01.487000 audit: BPF prog-id=53 op=UNLOAD Jan 28 01:22:01.487000 audit: BPF prog-id=54 op=UNLOAD Jan 28 01:22:02.619608 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:22:02.619918 systemd[1]: kubelet.service: Failed with 
result 'signal'. Jan 28 01:22:02.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:22:02.620562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:22:02.620642 systemd[1]: kubelet.service: Consumed 692ms CPU time, 98.5M memory peak. Jan 28 01:22:02.665550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:22:04.170587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:22:04.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:22:04.229725 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:22:04.867926 kubelet[2614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:22:04.867926 kubelet[2614]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:22:04.867926 kubelet[2614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 01:22:04.877218 kubelet[2614]: I0128 01:22:04.866656 2614 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:22:06.295338 kubelet[2614]: I0128 01:22:06.291551 2614 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 28 01:22:06.295338 kubelet[2614]: I0128 01:22:06.292250 2614 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:22:06.295338 kubelet[2614]: I0128 01:22:06.294405 2614 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:22:06.906524 kubelet[2614]: E0128 01:22:06.895097 2614 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:22:07.018716 kubelet[2614]: I0128 01:22:07.011418 2614 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:22:07.298350 kubelet[2614]: I0128 01:22:07.289278 2614 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:22:07.646732 kubelet[2614]: I0128 01:22:07.645699 2614 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:22:07.651980 kubelet[2614]: I0128 01:22:07.650610 2614 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:22:07.654413 kubelet[2614]: I0128 01:22:07.650776 2614 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:22:07.655200 kubelet[2614]: I0128 01:22:07.654569 2614 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:22:07.655200 
kubelet[2614]: I0128 01:22:07.654628 2614 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 01:22:07.655505 kubelet[2614]: I0128 01:22:07.655391 2614 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:22:07.662676 kubelet[2614]: I0128 01:22:07.661710 2614 kubelet.go:480] "Attempting to sync node with API server" Jan 28 01:22:07.662676 kubelet[2614]: I0128 01:22:07.662287 2614 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:22:07.673638 kubelet[2614]: I0128 01:22:07.667327 2614 kubelet.go:386] "Adding apiserver pod source" Jan 28 01:22:07.673638 kubelet[2614]: I0128 01:22:07.667360 2614 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:22:07.688459 kubelet[2614]: E0128 01:22:07.688117 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:22:07.692253 kubelet[2614]: E0128 01:22:07.688777 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:22:07.768526 kubelet[2614]: I0128 01:22:07.768485 2614 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:22:07.783858 kubelet[2614]: I0128 01:22:07.783686 2614 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:22:07.792783 kubelet[2614]: W0128 
01:22:07.791849 2614 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:22:07.814622 kubelet[2614]: I0128 01:22:07.814528 2614 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:22:07.814790 kubelet[2614]: I0128 01:22:07.814750 2614 server.go:1289] "Started kubelet" Jan 28 01:22:07.824105 kubelet[2614]: I0128 01:22:07.818670 2614 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:22:07.824105 kubelet[2614]: I0128 01:22:07.822678 2614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:22:07.992202 kubelet[2614]: I0128 01:22:07.817625 2614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:22:07.992202 kubelet[2614]: I0128 01:22:07.972234 2614 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:22:07.992202 kubelet[2614]: I0128 01:22:07.973868 2614 server.go:317] "Adding debug handlers to kubelet server" Jan 28 01:22:07.997366 kubelet[2614]: I0128 01:22:07.995193 2614 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:22:08.016992 kubelet[2614]: E0128 01:22:07.991463 2614 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.61:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.61:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec07899f2adff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:22:07.814610431 +0000 UTC m=+3.535119943,LastTimestamp:2026-01-28 
01:22:07.814610431 +0000 UTC m=+3.535119943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:22:08.016992 kubelet[2614]: I0128 01:22:08.013847 2614 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:22:08.016992 kubelet[2614]: I0128 01:22:08.014580 2614 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:22:08.016992 kubelet[2614]: I0128 01:22:08.014876 2614 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:22:08.016992 kubelet[2614]: E0128 01:22:08.015722 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:22:08.016992 kubelet[2614]: E0128 01:22:08.016279 2614 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:22:08.016992 kubelet[2614]: E0128 01:22:08.016429 2614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="200ms" Jan 28 01:22:08.060294 kubelet[2614]: I0128 01:22:08.053216 2614 factory.go:223] Registration of the systemd container factory successfully Jan 28 01:22:08.060524 kubelet[2614]: I0128 01:22:08.053472 2614 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:22:08.062480 kubelet[2614]: E0128 01:22:08.062154 2614 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:22:08.077984 kubelet[2614]: I0128 01:22:08.075331 2614 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:22:08.117924 kubelet[2614]: E0128 01:22:08.116342 2614 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:22:08.125000 audit[2633]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.146685 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 28 01:22:08.146934 kernel: audit: type=1325 audit(1769563328.125:360): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.125000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd44eccbb0 a2=0 a3=0 items=0 ppid=2614 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.204932 kubelet[2614]: I0128 01:22:08.204098 2614 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:22:08.204932 kubelet[2614]: I0128 01:22:08.204123 2614 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:22:08.204932 kubelet[2614]: I0128 01:22:08.204154 2614 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:22:08.255768 kernel: audit: type=1300 audit(1769563328.125:360): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd44eccbb0 a2=0 a3=0 items=0 ppid=2614 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.372632 kernel: audit: type=1327 audit(1769563328.125:360): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:22:08.374159 kernel: audit: type=1325 audit(1769563328.150:361): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.374649 kernel: audit: type=1300 audit(1769563328.150:361): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe66a3d330 a2=0 a3=0 items=0 ppid=2614 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.374706 kernel: audit: type=1327 audit(1769563328.150:361): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:22:08.374735 kernel: audit: type=1325 audit(1769563328.183:362): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.374864 kernel: audit: type=1300 audit(1769563328.183:362): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc6bf63790 a2=0 a3=0 items=0 ppid=2614 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:22:08.150000 audit[2635]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.150000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe66a3d330 a2=0 a3=0 items=0 ppid=2614 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:22:08.382154 kernel: audit: type=1327 audit(1769563328.183:362): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:22:08.183000 audit[2638]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.183000 audit[2638]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc6bf63790 a2=0 a3=0 items=0 ppid=2614 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:22:08.382607 kubelet[2614]: E0128 01:22:08.373136 2614 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:22:08.382607 kubelet[2614]: I0128 01:22:08.375545 2614 policy_none.go:49] "None policy: Start" Jan 28 01:22:08.382607 kubelet[2614]: I0128 01:22:08.375731 2614 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:22:08.382607 kubelet[2614]: I0128 01:22:08.375760 2614 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:22:08.382607 kubelet[2614]: E0128 01:22:08.379753 2614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="400ms" Jan 
28 01:22:08.396465 kernel: audit: type=1325 audit(1769563328.364:363): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.364000 audit[2641]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.425652 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 01:22:08.364000 audit[2641]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc53ffd970 a2=0 a3=0 items=0 ppid=2614 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.364000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:22:08.466000 audit[2645]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.466000 audit[2645]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff04fdcb90 a2=0 a3=0 items=0 ppid=2614 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 01:22:08.472000 audit[2648]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.472000 audit[2648]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0d8594a0 a2=0 a3=0 items=0 ppid=2614 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:22:08.477000 audit[2647]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2647 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:22:08.477000 audit[2647]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffa5ca8050 a2=0 a3=0 items=0 ppid=2614 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:22:08.486396 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:22:08.488720 kubelet[2614]: I0128 01:22:08.468266 2614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 28 01:22:08.488720 kubelet[2614]: I0128 01:22:08.478702 2614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 28 01:22:08.488720 kubelet[2614]: I0128 01:22:08.478793 2614 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 28 01:22:08.488720 kubelet[2614]: I0128 01:22:08.478915 2614 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 01:22:08.488720 kubelet[2614]: I0128 01:22:08.478977 2614 kubelet.go:2436] "Starting kubelet main sync loop" Jan 28 01:22:08.488720 kubelet[2614]: E0128 01:22:08.479207 2614 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:22:08.488720 kubelet[2614]: E0128 01:22:08.479914 2614 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:22:08.500000 audit[2649]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:22:08.500000 audit[2649]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8af2fa60 a2=0 a3=0 items=0 ppid=2614 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:22:08.501000 audit[2650]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.501000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd6413670 a2=0 a3=0 items=0 ppid=2614 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:22:08.504661 kubelet[2614]: E0128 01:22:08.503970 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: 
Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:22:08.508000 audit[2652]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:22:08.508000 audit[2652]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc2905ad0 a2=0 a3=0 items=0 ppid=2614 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 01:22:08.515000 audit[2651]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2651 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:22:08.515000 audit[2651]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3477d5d0 a2=0 a3=0 items=0 ppid=2614 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.515000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:22:08.530000 audit[2653]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2653 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:22:08.530000 audit[2653]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda9638040 a2=0 a3=0 items=0 ppid=2614 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:08.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 01:22:08.550463 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:22:08.565972 kubelet[2614]: E0128 01:22:08.565705 2614 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:22:08.571162 kubelet[2614]: I0128 01:22:08.570242 2614 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:22:08.571162 kubelet[2614]: I0128 01:22:08.570355 2614 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:22:08.571669 kubelet[2614]: I0128 01:22:08.571527 2614 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:22:08.576596 kubelet[2614]: E0128 01:22:08.576451 2614 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:22:08.576596 kubelet[2614]: E0128 01:22:08.576544 2614 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:22:08.656161 kubelet[2614]: I0128 01:22:08.652616 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:08.656161 kubelet[2614]: I0128 01:22:08.652720 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:08.656161 kubelet[2614]: I0128 01:22:08.653345 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5864bb654a2cf064f58c4da03d860b2b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5864bb654a2cf064f58c4da03d860b2b\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:22:08.654983 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 28 01:22:08.674127 kubelet[2614]: I0128 01:22:08.662203 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5864bb654a2cf064f58c4da03d860b2b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5864bb654a2cf064f58c4da03d860b2b\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:22:08.674127 kubelet[2614]: I0128 01:22:08.662297 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:08.674127 kubelet[2614]: I0128 01:22:08.662333 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:08.674127 kubelet[2614]: I0128 01:22:08.662360 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:08.674127 kubelet[2614]: I0128 01:22:08.662387 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " 
pod="kube-system/kube-scheduler-localhost" Jan 28 01:22:08.674411 kubelet[2614]: I0128 01:22:08.662418 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5864bb654a2cf064f58c4da03d860b2b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5864bb654a2cf064f58c4da03d860b2b\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:22:08.681837 kubelet[2614]: I0128 01:22:08.681348 2614 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:22:08.685307 kubelet[2614]: E0128 01:22:08.684347 2614 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 01:22:08.686466 kubelet[2614]: E0128 01:22:08.686370 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:08.706418 systemd[1]: Created slice kubepods-burstable-pod5864bb654a2cf064f58c4da03d860b2b.slice - libcontainer container kubepods-burstable-pod5864bb654a2cf064f58c4da03d860b2b.slice. Jan 28 01:22:08.722598 kubelet[2614]: E0128 01:22:08.722375 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:08.736390 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Jan 28 01:22:08.749121 kubelet[2614]: E0128 01:22:08.748384 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:08.787624 kubelet[2614]: E0128 01:22:08.786412 2614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="800ms" Jan 28 01:22:09.120923 kubelet[2614]: E0128 01:22:09.118859 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:09.124611 kubelet[2614]: E0128 01:22:09.124325 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:22:09.125483 kubelet[2614]: E0128 01:22:09.125234 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:09.125483 kubelet[2614]: E0128 01:22:09.125438 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:09.125668 kubelet[2614]: I0128 01:22:09.125589 2614 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:22:09.127869 kubelet[2614]: E0128 01:22:09.127645 2614 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: 
connect: connection refused" node="localhost" Jan 28 01:22:09.128672 kubelet[2614]: E0128 01:22:09.128261 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:22:09.130583 kubelet[2614]: E0128 01:22:09.130347 2614 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:22:09.133245 containerd[1624]: time="2026-01-28T01:22:09.132771076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5864bb654a2cf064f58c4da03d860b2b,Namespace:kube-system,Attempt:0,}" Jan 28 01:22:09.145278 containerd[1624]: time="2026-01-28T01:22:09.132720688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 28 01:22:09.145278 containerd[1624]: time="2026-01-28T01:22:09.132969882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 28 01:22:09.485252 kubelet[2614]: E0128 01:22:09.477900 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:22:09.491420 containerd[1624]: 
time="2026-01-28T01:22:09.491236676Z" level=info msg="connecting to shim 03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f" address="unix:///run/containerd/s/c38d794bdb9b2efe1074e8d69e0448e6a704ea4368b55926dec67c4143092ac4" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:22:09.523703 containerd[1624]: time="2026-01-28T01:22:09.523595431Z" level=info msg="connecting to shim ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8" address="unix:///run/containerd/s/1ba2154d6325ff07a5f2e4e2b293ec2caf8e0ded59a4285b06f87193516c717f" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:22:09.556244 kubelet[2614]: I0128 01:22:09.555464 2614 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:22:09.557628 kubelet[2614]: E0128 01:22:09.557528 2614 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 01:22:09.597156 kubelet[2614]: E0128 01:22:09.589231 2614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="1.6s" Jan 28 01:22:09.597307 containerd[1624]: time="2026-01-28T01:22:09.596676647Z" level=info msg="connecting to shim 2debcf68800065793a848da76ffca59e49f3331b44a73e75487c6f1bf2c02ec5" address="unix:///run/containerd/s/027b54facc0bcb646ab506bf6e64eb4d7e2036b8272bd0f485eea017dcbc0840" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:22:09.999087 kubelet[2614]: E0128 01:22:09.995708 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:22:10.766200 systemd[1]: Started cri-containerd-03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f.scope - libcontainer container 03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f. Jan 28 01:22:10.793349 kubelet[2614]: I0128 01:22:10.792394 2614 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:22:10.798141 kubelet[2614]: E0128 01:22:10.793640 2614 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 01:22:10.818136 systemd[1]: Started cri-containerd-ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8.scope - libcontainer container ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8. Jan 28 01:22:10.982000 audit: BPF prog-id=83 op=LOAD Jan 28 01:22:10.987000 audit: BPF prog-id=84 op=LOAD Jan 28 01:22:10.987000 audit[2719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2677 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:10.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303763663037656233316366363363653735333935653839616361 Jan 28 01:22:10.987000 audit: BPF prog-id=84 op=UNLOAD Jan 28 01:22:10.987000 audit[2719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:10.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303763663037656233316366363363653735333935653839616361 Jan 28 01:22:11.306376 kubelet[2614]: E0128 01:22:11.304862 2614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="3.2s" Jan 28 01:22:11.306376 kubelet[2614]: E0128 01:22:11.305079 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:22:11.306376 kubelet[2614]: E0128 01:22:11.305225 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:22:11.306376 kubelet[2614]: E0128 01:22:11.305899 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:22:11.312000 audit: BPF prog-id=85 op=LOAD Jan 28 01:22:11.312000 audit[2719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c000106488 a2=98 a3=0 items=0 ppid=2677 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303763663037656233316366363363653735333935653839616361 Jan 28 01:22:11.315000 audit: BPF prog-id=86 op=LOAD Jan 28 01:22:11.315000 audit[2719]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2677 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303763663037656233316366363363653735333935653839616361 Jan 28 01:22:11.315000 audit: BPF prog-id=86 op=UNLOAD Jan 28 01:22:11.315000 audit[2719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303763663037656233316366363363653735333935653839616361 Jan 28 01:22:11.316000 audit: BPF prog-id=85 op=UNLOAD Jan 28 01:22:11.316000 audit[2719]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303763663037656233316366363363653735333935653839616361 Jan 28 01:22:11.317000 audit: BPF prog-id=87 op=LOAD Jan 28 01:22:11.317000 audit[2719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2677 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303763663037656233316366363363653735333935653839616361 Jan 28 01:22:11.347566 systemd[1]: Started cri-containerd-2debcf68800065793a848da76ffca59e49f3331b44a73e75487c6f1bf2c02ec5.scope - libcontainer container 2debcf68800065793a848da76ffca59e49f3331b44a73e75487c6f1bf2c02ec5. 
Jan 28 01:22:11.396000 audit: BPF prog-id=88 op=LOAD Jan 28 01:22:11.404000 audit: BPF prog-id=89 op=LOAD Jan 28 01:22:11.404000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019c238 a2=98 a3=0 items=0 ppid=2673 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343730666261643263376663656232333039313835616630366661 Jan 28 01:22:11.404000 audit: BPF prog-id=89 op=UNLOAD Jan 28 01:22:11.404000 audit[2697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2673 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343730666261643263376663656232333039313835616630366661 Jan 28 01:22:11.404000 audit: BPF prog-id=90 op=LOAD Jan 28 01:22:11.404000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019c488 a2=98 a3=0 items=0 ppid=2673 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.404000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343730666261643263376663656232333039313835616630366661 Jan 28 01:22:11.404000 audit: BPF prog-id=91 op=LOAD Jan 28 01:22:11.404000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00019c218 a2=98 a3=0 items=0 ppid=2673 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343730666261643263376663656232333039313835616630366661 Jan 28 01:22:11.404000 audit: BPF prog-id=91 op=UNLOAD Jan 28 01:22:11.404000 audit[2697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2673 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343730666261643263376663656232333039313835616630366661 Jan 28 01:22:11.404000 audit: BPF prog-id=90 op=UNLOAD Jan 28 01:22:11.404000 audit[2697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2673 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:22:11.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343730666261643263376663656232333039313835616630366661 Jan 28 01:22:11.404000 audit: BPF prog-id=92 op=LOAD Jan 28 01:22:11.404000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019c6e8 a2=98 a3=0 items=0 ppid=2673 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343730666261643263376663656232333039313835616630366661 Jan 28 01:22:11.454000 audit: BPF prog-id=93 op=LOAD Jan 28 01:22:11.482000 audit: BPF prog-id=94 op=LOAD Jan 28 01:22:11.482000 audit[2717]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2690 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264656263663638383030303635373933613834386461373666666361 Jan 28 01:22:11.483000 audit: BPF prog-id=94 op=UNLOAD Jan 28 01:22:11.483000 audit[2717]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2690 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264656263663638383030303635373933613834386461373666666361 Jan 28 01:22:11.483000 audit: BPF prog-id=95 op=LOAD Jan 28 01:22:11.483000 audit[2717]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2690 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264656263663638383030303635373933613834386461373666666361 Jan 28 01:22:11.483000 audit: BPF prog-id=96 op=LOAD Jan 28 01:22:11.483000 audit[2717]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2690 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264656263663638383030303635373933613834386461373666666361 Jan 28 01:22:11.483000 audit: BPF prog-id=96 op=UNLOAD Jan 28 01:22:11.483000 audit[2717]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2690 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264656263663638383030303635373933613834386461373666666361 Jan 28 01:22:11.483000 audit: BPF prog-id=95 op=UNLOAD Jan 28 01:22:11.483000 audit[2717]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2690 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264656263663638383030303635373933613834386461373666666361 Jan 28 01:22:11.483000 audit: BPF prog-id=97 op=LOAD Jan 28 01:22:11.483000 audit[2717]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2690 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:11.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264656263663638383030303635373933613834386461373666666361 Jan 28 01:22:11.872957 kubelet[2614]: E0128 01:22:11.871365 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:22:11.884141 containerd[1624]: time="2026-01-28T01:22:11.884097641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8\"" Jan 28 01:22:11.906409 kubelet[2614]: E0128 01:22:11.906199 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:11.942475 containerd[1624]: time="2026-01-28T01:22:11.942417113Z" level=info msg="CreateContainer within sandbox \"ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:22:11.984476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1686364729.mount: Deactivated successfully. Jan 28 01:22:12.005075 containerd[1624]: time="2026-01-28T01:22:12.003193934Z" level=info msg="Container db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:22:12.006191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400831766.mount: Deactivated successfully. 
Jan 28 01:22:12.065121 containerd[1624]: time="2026-01-28T01:22:12.064699891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5864bb654a2cf064f58c4da03d860b2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2debcf68800065793a848da76ffca59e49f3331b44a73e75487c6f1bf2c02ec5\"" Jan 28 01:22:12.072978 kubelet[2614]: E0128 01:22:12.071896 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:12.327297 containerd[1624]: time="2026-01-28T01:22:12.326496010Z" level=info msg="CreateContainer within sandbox \"2debcf68800065793a848da76ffca59e49f3331b44a73e75487c6f1bf2c02ec5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:22:12.333165 containerd[1624]: time="2026-01-28T01:22:12.328869527Z" level=info msg="CreateContainer within sandbox \"ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2\"" Jan 28 01:22:12.334351 containerd[1624]: time="2026-01-28T01:22:12.333461819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f\"" Jan 28 01:22:12.344577 kubelet[2614]: E0128 01:22:12.343672 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:12.346247 containerd[1624]: time="2026-01-28T01:22:12.346157381Z" level=info msg="StartContainer for \"db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2\"" Jan 28 01:22:12.359390 containerd[1624]: time="2026-01-28T01:22:12.355630623Z" 
level=info msg="connecting to shim db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2" address="unix:///run/containerd/s/1ba2154d6325ff07a5f2e4e2b293ec2caf8e0ded59a4285b06f87193516c717f" protocol=ttrpc version=3 Jan 28 01:22:12.360798 kubelet[2614]: E0128 01:22:12.360510 2614 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.61:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.61:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec07899f2adff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:22:07.814610431 +0000 UTC m=+3.535119943,LastTimestamp:2026-01-28 01:22:07.814610431 +0000 UTC m=+3.535119943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:22:12.386113 containerd[1624]: time="2026-01-28T01:22:12.381758177Z" level=info msg="CreateContainer within sandbox \"03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:22:12.401752 kubelet[2614]: I0128 01:22:12.400634 2614 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:22:12.405651 containerd[1624]: time="2026-01-28T01:22:12.404245006Z" level=info msg="Container 95a3159570827a077ec2ade362d11837d7594c5a8087c9822fbee29b4ac0835f: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:22:12.406532 kubelet[2614]: E0128 01:22:12.406353 2614 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 
01:22:12.446204 containerd[1624]: time="2026-01-28T01:22:12.446149771Z" level=info msg="CreateContainer within sandbox \"2debcf68800065793a848da76ffca59e49f3331b44a73e75487c6f1bf2c02ec5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95a3159570827a077ec2ade362d11837d7594c5a8087c9822fbee29b4ac0835f\"" Jan 28 01:22:12.447349 containerd[1624]: time="2026-01-28T01:22:12.447313316Z" level=info msg="StartContainer for \"95a3159570827a077ec2ade362d11837d7594c5a8087c9822fbee29b4ac0835f\"" Jan 28 01:22:12.449918 containerd[1624]: time="2026-01-28T01:22:12.449880818Z" level=info msg="connecting to shim 95a3159570827a077ec2ade362d11837d7594c5a8087c9822fbee29b4ac0835f" address="unix:///run/containerd/s/027b54facc0bcb646ab506bf6e64eb4d7e2036b8272bd0f485eea017dcbc0840" protocol=ttrpc version=3 Jan 28 01:22:12.497668 containerd[1624]: time="2026-01-28T01:22:12.497112605Z" level=info msg="Container 66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:22:12.554666 systemd[1]: Started cri-containerd-db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2.scope - libcontainer container db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2. 
Jan 28 01:22:12.632333 containerd[1624]: time="2026-01-28T01:22:12.631578953Z" level=info msg="CreateContainer within sandbox \"03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212\"" Jan 28 01:22:12.652418 containerd[1624]: time="2026-01-28T01:22:12.646677929Z" level=info msg="StartContainer for \"66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212\"" Jan 28 01:22:12.677318 containerd[1624]: time="2026-01-28T01:22:12.664092840Z" level=info msg="connecting to shim 66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212" address="unix:///run/containerd/s/c38d794bdb9b2efe1074e8d69e0448e6a704ea4368b55926dec67c4143092ac4" protocol=ttrpc version=3 Jan 28 01:22:12.702976 systemd[1]: Started cri-containerd-95a3159570827a077ec2ade362d11837d7594c5a8087c9822fbee29b4ac0835f.scope - libcontainer container 95a3159570827a077ec2ade362d11837d7594c5a8087c9822fbee29b4ac0835f. 
Jan 28 01:22:13.219000 audit: BPF prog-id=98 op=LOAD Jan 28 01:22:13.284736 kernel: kauditd_printk_skb: 92 callbacks suppressed Jan 28 01:22:13.285297 kernel: audit: type=1334 audit(1769563333.219:396): prog-id=98 op=LOAD Jan 28 01:22:13.363548 kubelet[2614]: E0128 01:22:13.363239 2614 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:22:13.359000 audit: BPF prog-id=99 op=LOAD Jan 28 01:22:13.386282 kernel: audit: type=1334 audit(1769563333.359:397): prog-id=99 op=LOAD Jan 28 01:22:13.359000 audit[2789]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.481199 kernel: audit: type=1300 audit(1769563333.359:397): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.507891 kernel: audit: type=1327 audit(1769563333.359:397): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.508154 kernel: audit: type=1334 audit(1769563333.359:398): prog-id=99 op=UNLOAD Jan 28 01:22:13.359000 audit: BPF prog-id=99 op=UNLOAD Jan 28 01:22:13.507389 systemd[1]: Started cri-containerd-66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212.scope - libcontainer container 66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212. Jan 28 01:22:13.359000 audit[2789]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.536961 kernel: audit: type=1300 audit(1769563333.359:398): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.537583 kernel: audit: type=1327 audit(1769563333.359:398): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.574075 kernel: audit: type=1334 
audit(1769563333.359:399): prog-id=100 op=LOAD Jan 28 01:22:13.359000 audit: BPF prog-id=100 op=LOAD Jan 28 01:22:13.594787 kernel: audit: type=1300 audit(1769563333.359:399): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.359000 audit[2789]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.370000 audit: BPF prog-id=101 op=LOAD Jan 28 01:22:13.370000 audit[2789]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.370000 audit: BPF prog-id=101 op=UNLOAD Jan 28 01:22:13.370000 audit[2789]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.619122 kernel: audit: type=1327 audit(1769563333.359:399): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.370000 audit: BPF prog-id=100 op=UNLOAD Jan 28 01:22:13.370000 audit[2789]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.370000 audit: BPF prog-id=102 op=LOAD Jan 28 01:22:13.370000 audit[2789]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2677 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.370000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343631326263396535623762306161313639303637396165373531 Jan 28 01:22:13.536000 audit: BPF prog-id=103 op=LOAD Jan 28 01:22:13.554000 audit: BPF prog-id=104 op=LOAD Jan 28 01:22:13.554000 audit[2801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2690 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935613331353935373038323761303737656332616465333632643131 Jan 28 01:22:13.554000 audit: BPF prog-id=104 op=UNLOAD Jan 28 01:22:13.554000 audit[2801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2690 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935613331353935373038323761303737656332616465333632643131 Jan 28 01:22:13.554000 audit: BPF prog-id=105 op=LOAD Jan 28 01:22:13.554000 audit[2801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2690 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935613331353935373038323761303737656332616465333632643131 Jan 28 01:22:13.554000 audit: BPF prog-id=106 op=LOAD Jan 28 01:22:13.554000 audit[2801]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2690 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935613331353935373038323761303737656332616465333632643131 Jan 28 01:22:13.554000 audit: BPF prog-id=106 op=UNLOAD Jan 28 01:22:13.554000 audit[2801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2690 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935613331353935373038323761303737656332616465333632643131 Jan 28 01:22:13.554000 audit: BPF prog-id=105 op=UNLOAD Jan 28 01:22:13.554000 audit[2801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2690 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935613331353935373038323761303737656332616465333632643131 Jan 28 01:22:13.554000 audit: BPF prog-id=107 op=LOAD Jan 28 01:22:13.554000 audit[2801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2690 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935613331353935373038323761303737656332616465333632643131 Jan 28 01:22:13.717000 audit: BPF prog-id=108 op=LOAD Jan 28 01:22:13.789000 audit: BPF prog-id=109 op=LOAD Jan 28 01:22:13.789000 audit[2822]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2673 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373134643966653039373163393230623638653930393231646435 Jan 28 01:22:13.789000 audit: BPF prog-id=109 op=UNLOAD Jan 28 01:22:13.789000 audit[2822]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=2673 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373134643966653039373163393230623638653930393231646435 Jan 28 01:22:13.789000 audit: BPF prog-id=110 op=LOAD Jan 28 01:22:13.789000 audit[2822]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2673 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373134643966653039373163393230623638653930393231646435 Jan 28 01:22:13.789000 audit: BPF prog-id=111 op=LOAD Jan 28 01:22:13.789000 audit[2822]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2673 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373134643966653039373163393230623638653930393231646435 Jan 28 01:22:13.789000 audit: BPF prog-id=111 op=UNLOAD Jan 28 01:22:13.789000 audit[2822]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2673 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373134643966653039373163393230623638653930393231646435 Jan 28 01:22:13.789000 audit: BPF prog-id=110 op=UNLOAD Jan 28 01:22:13.789000 audit[2822]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2673 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373134643966653039373163393230623638653930393231646435 Jan 28 01:22:13.789000 audit: BPF prog-id=112 op=LOAD Jan 28 01:22:13.789000 audit[2822]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2673 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:13.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373134643966653039373163393230623638653930393231646435 Jan 28 01:22:13.912402 containerd[1624]: time="2026-01-28T01:22:13.912181736Z" 
level=info msg="StartContainer for \"db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2\" returns successfully" Jan 28 01:22:14.523611 kubelet[2614]: E0128 01:22:14.523363 2614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="6.4s" Jan 28 01:22:14.561711 containerd[1624]: time="2026-01-28T01:22:14.561553195Z" level=info msg="StartContainer for \"66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212\" returns successfully" Jan 28 01:22:14.574975 containerd[1624]: time="2026-01-28T01:22:14.574215414Z" level=info msg="StartContainer for \"95a3159570827a077ec2ade362d11837d7594c5a8087c9822fbee29b4ac0835f\" returns successfully" Jan 28 01:22:14.617619 kubelet[2614]: E0128 01:22:14.617250 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:14.618355 kubelet[2614]: E0128 01:22:14.618327 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:15.660509 kubelet[2614]: I0128 01:22:15.657893 2614 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:22:15.683687 kubelet[2614]: E0128 01:22:15.668182 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:15.683687 kubelet[2614]: E0128 01:22:15.668446 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:15.683687 kubelet[2614]: E0128 01:22:15.670753 2614 kubelet.go:3305] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:15.683687 kubelet[2614]: E0128 01:22:15.681158 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:15.683687 kubelet[2614]: E0128 01:22:15.681325 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:15.683687 kubelet[2614]: E0128 01:22:15.681918 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:17.399313 kubelet[2614]: E0128 01:22:17.393506 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:17.413396 kubelet[2614]: E0128 01:22:17.403862 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:17.413396 kubelet[2614]: E0128 01:22:17.404431 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:17.413396 kubelet[2614]: E0128 01:22:17.407181 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:18.584149 kubelet[2614]: E0128 01:22:18.578247 2614 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:22:18.710096 kubelet[2614]: E0128 01:22:18.709259 2614 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:18.710096 kubelet[2614]: E0128 01:22:18.709714 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:21.438873 kubelet[2614]: E0128 01:22:21.438250 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:21.438873 kubelet[2614]: E0128 01:22:21.438784 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:23.252497 kubelet[2614]: E0128 01:22:23.247906 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:23.252497 kubelet[2614]: E0128 01:22:23.248327 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:24.528958 kubelet[2614]: E0128 01:22:24.527227 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:24.664115 kubelet[2614]: E0128 01:22:24.604428 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:25.286296 kubelet[2614]: E0128 01:22:25.283589 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:22:25.681980 kubelet[2614]: E0128 01:22:25.678327 2614 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 28 01:22:26.467902 kubelet[2614]: E0128 01:22:26.456464 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:22:26.970192 kubelet[2614]: E0128 01:22:26.962520 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:22:26.970192 kubelet[2614]: E0128 01:22:26.966990 2614 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:22:28.831192 kubelet[2614]: E0128 01:22:28.822157 2614 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:22:31.174749 kubelet[2614]: E0128 01:22:31.173309 2614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: TLS handshake timeout (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 28 
01:22:32.017710 kubelet[2614]: E0128 01:22:32.016336 2614 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:22:32.107973 kubelet[2614]: I0128 01:22:32.107889 2614 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:22:32.837337 kubelet[2614]: E0128 01:22:32.729478 2614 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.61:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188ec07899f2adff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:22:07.814610431 +0000 UTC m=+3.535119943,LastTimestamp:2026-01-28 01:22:07.814610431 +0000 UTC m=+3.535119943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:22:33.817264 kubelet[2614]: E0128 01:22:33.813698 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:33.826094 kubelet[2614]: E0128 01:22:33.825474 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:38.460225 kubelet[2614]: E0128 01:22:38.455768 2614 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" 
err="nodes \"localhost\" not found" node="localhost" Jan 28 01:22:38.613131 kubelet[2614]: I0128 01:22:38.610528 2614 apiserver.go:52] "Watching apiserver" Jan 28 01:22:38.638305 kubelet[2614]: I0128 01:22:38.638268 2614 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:22:38.675434 kubelet[2614]: E0128 01:22:38.638531 2614 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 01:22:38.856176 kubelet[2614]: E0128 01:22:38.832109 2614 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:22:38.863774 kubelet[2614]: E0128 01:22:38.863747 2614 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:22:38.864481 kubelet[2614]: E0128 01:22:38.864460 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:38.924898 kubelet[2614]: I0128 01:22:38.918951 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:22:38.931388 kubelet[2614]: I0128 01:22:38.925899 2614 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:22:39.267715 kubelet[2614]: I0128 01:22:39.248475 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:22:39.340543 kubelet[2614]: E0128 01:22:39.331777 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:39.340543 kubelet[2614]: E0128 01:22:39.332512 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:39.340543 kubelet[2614]: I0128 01:22:39.335674 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:40.184986 kubelet[2614]: E0128 01:22:40.184927 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:47.409495 systemd[1]: Reload requested from client PID 2901 ('systemctl') (unit session-10.scope)... Jan 28 01:22:47.410698 systemd[1]: Reloading... Jan 28 01:22:47.833148 zram_generator::config[2947]: No configuration found. Jan 28 01:22:48.919172 kubelet[2614]: I0128 01:22:48.916983 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.916958265 podStartE2EDuration="9.916958265s" podCreationTimestamp="2026-01-28 01:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:22:48.814483145 +0000 UTC m=+44.534992677" watchObservedRunningTime="2026-01-28 01:22:48.916958265 +0000 UTC m=+44.637467768" Jan 28 01:22:49.073274 kubelet[2614]: I0128 01:22:49.063712 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=10.063693112 podStartE2EDuration="10.063693112s" podCreationTimestamp="2026-01-28 01:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:22:48.964792962 +0000 UTC m=+44.685302464" watchObservedRunningTime="2026-01-28 01:22:49.063693112 +0000 UTC m=+44.784202615" Jan 28 01:22:49.159823 systemd[1]: Reloading finished in 1748 ms. 
Jan 28 01:22:49.407746 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:22:49.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:22:49.467630 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:22:49.468229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:22:49.468319 systemd[1]: kubelet.service: Consumed 8.282s CPU time, 133M memory peak. Jan 28 01:22:49.492146 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 28 01:22:49.492285 kernel: audit: type=1131 audit(1769563369.465:420): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:22:49.493824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 01:22:49.500000 audit: BPF prog-id=113 op=LOAD Jan 28 01:22:49.500000 audit: BPF prog-id=63 op=UNLOAD Jan 28 01:22:49.500000 audit: BPF prog-id=114 op=LOAD Jan 28 01:22:49.569743 kernel: audit: type=1334 audit(1769563369.500:421): prog-id=113 op=LOAD Jan 28 01:22:49.569927 kernel: audit: type=1334 audit(1769563369.500:422): prog-id=63 op=UNLOAD Jan 28 01:22:49.570277 kernel: audit: type=1334 audit(1769563369.500:423): prog-id=114 op=LOAD Jan 28 01:22:49.500000 audit: BPF prog-id=115 op=LOAD Jan 28 01:22:49.694860 kernel: audit: type=1334 audit(1769563369.500:424): prog-id=115 op=LOAD Jan 28 01:22:49.695159 kernel: audit: type=1334 audit(1769563369.500:425): prog-id=64 op=UNLOAD Jan 28 01:22:49.695217 kernel: audit: type=1334 audit(1769563369.500:426): prog-id=65 op=UNLOAD Jan 28 01:22:49.500000 audit: BPF prog-id=64 op=UNLOAD Jan 28 01:22:49.500000 audit: BPF prog-id=65 op=UNLOAD Jan 28 01:22:49.507000 audit: BPF prog-id=116 op=LOAD Jan 28 01:22:49.764446 kernel: audit: type=1334 audit(1769563369.507:427): prog-id=116 op=LOAD Jan 28 01:22:49.764716 kernel: audit: type=1334 audit(1769563369.507:428): prog-id=117 op=LOAD Jan 28 01:22:49.507000 audit: BPF prog-id=117 op=LOAD Jan 28 01:22:49.507000 audit: BPF prog-id=81 op=UNLOAD Jan 28 01:22:49.825348 kernel: audit: type=1334 audit(1769563369.507:429): prog-id=81 op=UNLOAD Jan 28 01:22:49.507000 audit: BPF prog-id=82 op=UNLOAD Jan 28 01:22:49.507000 audit: BPF prog-id=118 op=LOAD Jan 28 01:22:49.507000 audit: BPF prog-id=70 op=UNLOAD Jan 28 01:22:49.507000 audit: BPF prog-id=119 op=LOAD Jan 28 01:22:49.507000 audit: BPF prog-id=120 op=LOAD Jan 28 01:22:49.507000 audit: BPF prog-id=71 op=UNLOAD Jan 28 01:22:49.507000 audit: BPF prog-id=72 op=UNLOAD Jan 28 01:22:49.518000 audit: BPF prog-id=121 op=LOAD Jan 28 01:22:49.518000 audit: BPF prog-id=67 op=UNLOAD Jan 28 01:22:49.518000 audit: BPF prog-id=122 op=LOAD Jan 28 01:22:49.518000 audit: BPF prog-id=123 op=LOAD Jan 28 01:22:49.518000 audit: BPF prog-id=68 
op=UNLOAD Jan 28 01:22:49.518000 audit: BPF prog-id=69 op=UNLOAD Jan 28 01:22:49.534000 audit: BPF prog-id=124 op=LOAD Jan 28 01:22:49.534000 audit: BPF prog-id=78 op=UNLOAD Jan 28 01:22:49.534000 audit: BPF prog-id=125 op=LOAD Jan 28 01:22:49.534000 audit: BPF prog-id=126 op=LOAD Jan 28 01:22:49.534000 audit: BPF prog-id=79 op=UNLOAD Jan 28 01:22:49.534000 audit: BPF prog-id=80 op=UNLOAD Jan 28 01:22:49.540000 audit: BPF prog-id=127 op=LOAD Jan 28 01:22:49.540000 audit: BPF prog-id=75 op=UNLOAD Jan 28 01:22:49.540000 audit: BPF prog-id=128 op=LOAD Jan 28 01:22:49.540000 audit: BPF prog-id=129 op=LOAD Jan 28 01:22:49.540000 audit: BPF prog-id=76 op=UNLOAD Jan 28 01:22:49.540000 audit: BPF prog-id=77 op=UNLOAD Jan 28 01:22:49.540000 audit: BPF prog-id=130 op=LOAD Jan 28 01:22:49.540000 audit: BPF prog-id=73 op=UNLOAD Jan 28 01:22:49.548000 audit: BPF prog-id=131 op=LOAD Jan 28 01:22:49.548000 audit: BPF prog-id=66 op=UNLOAD Jan 28 01:22:49.563000 audit: BPF prog-id=132 op=LOAD Jan 28 01:22:49.563000 audit: BPF prog-id=74 op=UNLOAD Jan 28 01:22:51.075137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:22:51.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:22:51.130444 (kubelet)[2995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:22:51.922993 kubelet[2995]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:22:51.922993 kubelet[2995]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
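The deprecation warnings above all point at the kubelet config file. A minimal KubeletConfiguration sketch moving the flagged settings off the command line — the socket and plugin paths here are illustrative assumptions, and field availability (e.g. `containerRuntimeEndpoint`, added around v1.27) should be checked against the kubelet version in use:

```yaml
# Hypothetical drop-in replacing the deprecated flags noted in the log above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces --volume-plugin-dir.
volumePluginDir: /var/lib/kubelet/volumeplugins
# Matches the "Adding static pod path" entry later in this log.
staticPodPath: /etc/kubernetes/manifests
```

`--pod-infra-container-image` has no config-file equivalent; per the log message, the sandbox image is taken from the CRI runtime (containerd's config) from v1.35 on.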
Jan 28 01:22:51.922993 kubelet[2995]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:22:51.922993 kubelet[2995]: I0128 01:22:51.922964 2995 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:22:51.987895 kubelet[2995]: I0128 01:22:51.961208 2995 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 28 01:22:51.987895 kubelet[2995]: I0128 01:22:51.961243 2995 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:22:51.987895 kubelet[2995]: I0128 01:22:51.961626 2995 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:22:51.987895 kubelet[2995]: I0128 01:22:51.977160 2995 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 01:22:52.020115 kubelet[2995]: I0128 01:22:52.019934 2995 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:22:52.201269 kubelet[2995]: I0128 01:22:52.199904 2995 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:22:52.400957 kubelet[2995]: I0128 01:22:52.348345 2995 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:22:52.400957 kubelet[2995]: I0128 01:22:52.351686 2995 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:22:52.400957 kubelet[2995]: I0128 01:22:52.352211 2995 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:22:52.400957 kubelet[2995]: I0128 01:22:52.353917 2995 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:22:52.418186 
kubelet[2995]: I0128 01:22:52.353931 2995 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 01:22:52.418186 kubelet[2995]: I0128 01:22:52.353994 2995 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:22:52.418186 kubelet[2995]: I0128 01:22:52.355882 2995 kubelet.go:480] "Attempting to sync node with API server" Jan 28 01:22:52.418186 kubelet[2995]: I0128 01:22:52.356605 2995 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:22:52.418186 kubelet[2995]: I0128 01:22:52.359297 2995 kubelet.go:386] "Adding apiserver pod source" Jan 28 01:22:52.418186 kubelet[2995]: I0128 01:22:52.361268 2995 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:22:52.418186 kubelet[2995]: I0128 01:22:52.393159 2995 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:22:52.418186 kubelet[2995]: I0128 01:22:52.393850 2995 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:22:52.597229 kubelet[2995]: I0128 01:22:52.596990 2995 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:22:52.611428 kubelet[2995]: I0128 01:22:52.599612 2995 server.go:1289] "Started kubelet" Jan 28 01:22:52.611428 kubelet[2995]: I0128 01:22:52.606402 2995 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:22:52.619681 kubelet[2995]: I0128 01:22:52.618361 2995 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:22:52.754758 kubelet[2995]: I0128 01:22:52.754564 2995 server.go:317] "Adding debug handlers to kubelet server" Jan 28 01:22:52.756543 kubelet[2995]: I0128 01:22:52.756406 2995 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:22:52.760296 kubelet[2995]: I0128 01:22:52.757993 2995 server.go:255] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:22:52.764543 kubelet[2995]: I0128 01:22:52.761216 2995 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:22:52.764543 kubelet[2995]: I0128 01:22:52.761455 2995 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:22:52.764543 kubelet[2995]: I0128 01:22:52.761638 2995 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:22:52.764543 kubelet[2995]: I0128 01:22:52.761860 2995 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:22:52.782664 kubelet[2995]: I0128 01:22:52.781545 2995 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:22:52.839978 kubelet[2995]: I0128 01:22:52.839861 2995 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:22:52.839978 kubelet[2995]: I0128 01:22:52.839904 2995 factory.go:223] Registration of the systemd container factory successfully Jan 28 01:22:52.854249 kubelet[2995]: E0128 01:22:52.851976 2995 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:22:52.888426 kubelet[2995]: I0128 01:22:52.888273 2995 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 28 01:22:53.054362 kubelet[2995]: I0128 01:22:53.049451 2995 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 28 01:22:53.061714 kubelet[2995]: I0128 01:22:53.057818 2995 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 28 01:22:53.061714 kubelet[2995]: I0128 01:22:53.057969 2995 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:22:53.061714 kubelet[2995]: I0128 01:22:53.057989 2995 kubelet.go:2436] "Starting kubelet main sync loop" Jan 28 01:22:53.061714 kubelet[2995]: E0128 01:22:53.058159 2995 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:22:53.160416 kubelet[2995]: E0128 01:22:53.160196 2995 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:22:53.394342 kubelet[2995]: I0128 01:22:53.393656 2995 apiserver.go:52] "Watching apiserver" Jan 28 01:22:53.402945 kubelet[2995]: E0128 01:22:53.401221 2995 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:22:53.694218 kubelet[2995]: I0128 01:22:53.692965 2995 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:22:53.694218 kubelet[2995]: I0128 01:22:53.693337 2995 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:22:53.694218 kubelet[2995]: I0128 01:22:53.693842 2995 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:22:53.703232 kubelet[2995]: I0128 01:22:53.703106 2995 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:22:53.703346 kubelet[2995]: I0128 01:22:53.703224 2995 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:22:53.706854 kubelet[2995]: I0128 01:22:53.705746 2995 policy_none.go:49] "None policy: Start" Jan 28 01:22:53.706854 kubelet[2995]: I0128 01:22:53.705878 2995 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 
01:22:53.706854 kubelet[2995]: I0128 01:22:53.705902 2995 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:22:53.706854 kubelet[2995]: I0128 01:22:53.706155 2995 state_mem.go:75] "Updated machine memory state" Jan 28 01:22:53.805804 kubelet[2995]: E0128 01:22:53.805257 2995 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:22:53.810870 kubelet[2995]: E0128 01:22:53.806212 2995 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:22:53.822442 kubelet[2995]: I0128 01:22:53.822341 2995 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:22:53.822442 kubelet[2995]: I0128 01:22:53.822424 2995 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:22:53.828397 kubelet[2995]: I0128 01:22:53.828281 2995 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:22:53.892111 kubelet[2995]: E0128 01:22:53.891299 2995 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:22:54.040813 kubelet[2995]: I0128 01:22:54.040624 2995 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:22:54.214112 kubelet[2995]: I0128 01:22:54.213716 2995 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 01:22:54.214112 kubelet[2995]: I0128 01:22:54.213904 2995 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:22:54.215565 kubelet[2995]: I0128 01:22:54.215118 2995 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:22:54.231167 containerd[1624]: time="2026-01-28T01:22:54.228586712Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:22:54.231738 kubelet[2995]: I0128 01:22:54.229147 2995 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:22:54.696631 kubelet[2995]: I0128 01:22:54.690427 2995 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:22:54.730128 systemd[1]: Created slice kubepods-besteffort-pod0cf01a6f_f832_403c_a259_d32b3ed8656f.slice - libcontainer container kubepods-besteffort-pod0cf01a6f_f832_403c_a259_d32b3ed8656f.slice. 
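The `kubepods-besteffort-pod…` slice name created above is derived from the pod's UID: under the systemd cgroup driver (which this kubelet reports using), dashes in the UID are replaced with underscores to form a valid unit name. A sketch of that mapping, inferred from the naming visible in this log rather than taken from kubelet source:

```python
def besteffort_pod_slice(pod_uid: str) -> str:
    """Build the systemd slice name kubelet uses for a BestEffort pod.

    Dashes in the UID are escaped to underscores so the UID does not
    read as systemd slice-hierarchy separators.
    """
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

# UID of the kube-proxy-sjkhr pod from the log above.
slice_name = besteffort_pod_slice("0cf01a6f-f832-403c-a259-d32b3ed8656f")
```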
Jan 28 01:22:54.822925 kubelet[2995]: I0128 01:22:54.813166 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cf01a6f-f832-403c-a259-d32b3ed8656f-lib-modules\") pod \"kube-proxy-sjkhr\" (UID: \"0cf01a6f-f832-403c-a259-d32b3ed8656f\") " pod="kube-system/kube-proxy-sjkhr" Jan 28 01:22:54.822925 kubelet[2995]: I0128 01:22:54.813227 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm5sl\" (UniqueName: \"kubernetes.io/projected/0cf01a6f-f832-403c-a259-d32b3ed8656f-kube-api-access-mm5sl\") pod \"kube-proxy-sjkhr\" (UID: \"0cf01a6f-f832-403c-a259-d32b3ed8656f\") " pod="kube-system/kube-proxy-sjkhr" Jan 28 01:22:54.822925 kubelet[2995]: I0128 01:22:54.813262 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5864bb654a2cf064f58c4da03d860b2b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5864bb654a2cf064f58c4da03d860b2b\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:22:54.822925 kubelet[2995]: I0128 01:22:54.813294 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5864bb654a2cf064f58c4da03d860b2b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5864bb654a2cf064f58c4da03d860b2b\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:22:54.822925 kubelet[2995]: I0128 01:22:54.813335 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:54.832931 
kubelet[2995]: I0128 01:22:54.813363 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:54.832931 kubelet[2995]: I0128 01:22:54.813390 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cf01a6f-f832-403c-a259-d32b3ed8656f-xtables-lock\") pod \"kube-proxy-sjkhr\" (UID: \"0cf01a6f-f832-403c-a259-d32b3ed8656f\") " pod="kube-system/kube-proxy-sjkhr" Jan 28 01:22:54.832931 kubelet[2995]: I0128 01:22:54.813416 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5864bb654a2cf064f58c4da03d860b2b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5864bb654a2cf064f58c4da03d860b2b\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:22:54.832931 kubelet[2995]: I0128 01:22:54.813443 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:54.832931 kubelet[2995]: I0128 01:22:54.813553 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:54.833253 kubelet[2995]: I0128 01:22:54.813583 2995 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:22:54.833253 kubelet[2995]: I0128 01:22:54.813613 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:22:54.833253 kubelet[2995]: I0128 01:22:54.813644 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0cf01a6f-f832-403c-a259-d32b3ed8656f-kube-proxy\") pod \"kube-proxy-sjkhr\" (UID: \"0cf01a6f-f832-403c-a259-d32b3ed8656f\") " pod="kube-system/kube-proxy-sjkhr" Jan 28 01:22:54.999266 kubelet[2995]: E0128 01:22:54.995829 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:54.999266 kubelet[2995]: E0128 01:22:54.997177 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:54.999266 kubelet[2995]: E0128 01:22:54.997646 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:55.160871 kubelet[2995]: E0128 01:22:55.160752 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:55.175719 containerd[1624]: time="2026-01-28T01:22:55.175326487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sjkhr,Uid:0cf01a6f-f832-403c-a259-d32b3ed8656f,Namespace:kube-system,Attempt:0,}" Jan 28 01:22:55.219949 kubelet[2995]: E0128 01:22:55.219905 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:55.224677 kubelet[2995]: E0128 01:22:55.224643 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:55.225940 kubelet[2995]: E0128 01:22:55.225397 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:55.631835 containerd[1624]: time="2026-01-28T01:22:55.629880569Z" level=info msg="connecting to shim d907afcdb91474de53729cd16de4ba304f11134f16961dcff87235338443d918" address="unix:///run/containerd/s/40794afcb49597e6e8aa63928875140a77878aa88b26c2e146b6097f11853837" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:22:56.064750 systemd[1]: Started cri-containerd-d907afcdb91474de53729cd16de4ba304f11134f16961dcff87235338443d918.scope - libcontainer container d907afcdb91474de53729cd16de4ba304f11134f16961dcff87235338443d918. 
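The recurring dns.go warnings above fire because the glibc resolver honors at most three `nameserver` entries (MAXNS in resolv.conf(5)), so kubelet truncates the host's list when building pod DNS config and logs the servers it actually applied. A rough illustration of that trimming — a hypothetical helper, not kubelet's implementation:

```python
MAX_NAMESERVERS = 3  # glibc resolver limit (MAXNS), see resolv.conf(5)

def applied_nameservers(resolv_conf: str) -> list[str]:
    """Return the nameservers that would actually take effect, in order."""
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if line.startswith("nameserver") and len(parts := line.split()) >= 2
    ]
    return servers[:MAX_NAMESERVERS]

# Four configured servers; only the first three survive, matching the
# "applied nameserver line" reported in the log above.
conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")
applied = applied_nameservers(conf)
```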
Jan 28 01:22:56.275860 kubelet[2995]: E0128 01:22:56.260687 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:56.275860 kubelet[2995]: E0128 01:22:56.262170 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:56.402159 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 28 01:22:56.402310 kernel: audit: type=1334 audit(1769563376.383:462): prog-id=133 op=LOAD Jan 28 01:22:56.383000 audit: BPF prog-id=133 op=LOAD Jan 28 01:22:56.423977 kernel: audit: type=1334 audit(1769563376.402:463): prog-id=134 op=LOAD Jan 28 01:22:56.402000 audit: BPF prog-id=134 op=LOAD Jan 28 01:22:56.402000 audit[3061]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.529161 kernel: audit: type=1300 audit(1769563376.402:463): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.529321 kernel: audit: type=1327 audit(1769563376.402:463): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.529364 kernel: audit: type=1334 audit(1769563376.402:464): prog-id=134 op=UNLOAD Jan 28 01:22:56.402000 audit: BPF prog-id=134 op=UNLOAD Jan 28 01:22:56.402000 audit[3061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.592591 kernel: audit: type=1300 audit(1769563376.402:464): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.593217 kernel: audit: type=1327 audit(1769563376.402:464): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.402000 audit: BPF prog-id=135 op=LOAD Jan 28 01:22:56.402000 audit[3061]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.726843 kernel: audit: type=1334 audit(1769563376.402:465): prog-id=135 op=LOAD Jan 28 01:22:56.726978 kernel: audit: type=1300 audit(1769563376.402:465): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.780987 kernel: audit: type=1327 audit(1769563376.402:465): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.402000 audit: BPF prog-id=136 op=LOAD Jan 28 01:22:56.402000 audit[3061]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.402000 audit: BPF prog-id=136 op=UNLOAD Jan 28 01:22:56.402000 audit[3061]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.402000 audit: BPF prog-id=135 op=UNLOAD Jan 28 01:22:56.402000 audit[3061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.403000 audit: BPF prog-id=137 op=LOAD Jan 28 01:22:56.403000 audit[3061]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3050 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:56.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439303761666364623931343734646535333732396364313664653462 Jan 28 01:22:56.880892 containerd[1624]: time="2026-01-28T01:22:56.880655562Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sjkhr,Uid:0cf01a6f-f832-403c-a259-d32b3ed8656f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d907afcdb91474de53729cd16de4ba304f11134f16961dcff87235338443d918\"" Jan 28 01:22:56.901729 kubelet[2995]: E0128 01:22:56.891918 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:57.003938 containerd[1624]: time="2026-01-28T01:22:56.996235387Z" level=info msg="CreateContainer within sandbox \"d907afcdb91474de53729cd16de4ba304f11134f16961dcff87235338443d918\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:22:57.206850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273999127.mount: Deactivated successfully. Jan 28 01:22:57.273184 containerd[1624]: time="2026-01-28T01:22:57.271777112Z" level=info msg="Container 2f3fe30b5a7a11877b018c9352e1dbe04e1446bcdf93f4329258310a6d900f22: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:22:57.310321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2409785269.mount: Deactivated successfully. 
Jan 28 01:22:57.449985 kubelet[2995]: E0128 01:22:57.449492 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:57.571843 containerd[1624]: time="2026-01-28T01:22:57.568205767Z" level=info msg="CreateContainer within sandbox \"d907afcdb91474de53729cd16de4ba304f11134f16961dcff87235338443d918\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f3fe30b5a7a11877b018c9352e1dbe04e1446bcdf93f4329258310a6d900f22\"" Jan 28 01:22:57.630674 containerd[1624]: time="2026-01-28T01:22:57.625833345Z" level=info msg="StartContainer for \"2f3fe30b5a7a11877b018c9352e1dbe04e1446bcdf93f4329258310a6d900f22\"" Jan 28 01:22:57.699157 containerd[1624]: time="2026-01-28T01:22:57.680960336Z" level=info msg="connecting to shim 2f3fe30b5a7a11877b018c9352e1dbe04e1446bcdf93f4329258310a6d900f22" address="unix:///run/containerd/s/40794afcb49597e6e8aa63928875140a77878aa88b26c2e146b6097f11853837" protocol=ttrpc version=3 Jan 28 01:22:57.976961 systemd[1]: Started cri-containerd-2f3fe30b5a7a11877b018c9352e1dbe04e1446bcdf93f4329258310a6d900f22.scope - libcontainer container 2f3fe30b5a7a11877b018c9352e1dbe04e1446bcdf93f4329258310a6d900f22. 
Jan 28 01:22:58.355227 kubelet[2995]: E0128 01:22:58.344162 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:58.487000 audit: BPF prog-id=138 op=LOAD Jan 28 01:22:58.487000 audit[3087]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3050 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:58.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336665333062356137613131383737623031386339333532653164 Jan 28 01:22:58.494000 audit: BPF prog-id=139 op=LOAD Jan 28 01:22:58.494000 audit[3087]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3050 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:58.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336665333062356137613131383737623031386339333532653164 Jan 28 01:22:58.494000 audit: BPF prog-id=139 op=UNLOAD Jan 28 01:22:58.494000 audit[3087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3050 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:58.494000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336665333062356137613131383737623031386339333532653164 Jan 28 01:22:58.494000 audit: BPF prog-id=138 op=UNLOAD Jan 28 01:22:58.494000 audit[3087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3050 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:58.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336665333062356137613131383737623031386339333532653164 Jan 28 01:22:58.494000 audit: BPF prog-id=140 op=LOAD Jan 28 01:22:58.494000 audit[3087]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3050 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:22:58.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336665333062356137613131383737623031386339333532653164 Jan 28 01:22:58.789193 containerd[1624]: time="2026-01-28T01:22:58.788549148Z" level=info msg="StartContainer for \"2f3fe30b5a7a11877b018c9352e1dbe04e1446bcdf93f4329258310a6d900f22\" returns successfully" Jan 28 01:22:59.386669 kubelet[2995]: E0128 01:22:59.384701 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:59.403309 systemd[1]: Created slice kubepods-besteffort-podc0d25caa_09d1_471b_a8a8_142c262a7655.slice - libcontainer container kubepods-besteffort-podc0d25caa_09d1_471b_a8a8_142c262a7655.slice. Jan 28 01:22:59.497090 kubelet[2995]: I0128 01:22:59.489158 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c0d25caa-09d1-471b-a8a8-142c262a7655-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jgj7g\" (UID: \"c0d25caa-09d1-471b-a8a8-142c262a7655\") " pod="tigera-operator/tigera-operator-7dcd859c48-jgj7g" Jan 28 01:22:59.497090 kubelet[2995]: I0128 01:22:59.489201 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svjlq\" (UniqueName: \"kubernetes.io/projected/c0d25caa-09d1-471b-a8a8-142c262a7655-kube-api-access-svjlq\") pod \"tigera-operator-7dcd859c48-jgj7g\" (UID: \"c0d25caa-09d1-471b-a8a8-142c262a7655\") " pod="tigera-operator/tigera-operator-7dcd859c48-jgj7g" Jan 28 01:22:59.566856 kubelet[2995]: I0128 01:22:59.564638 2995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sjkhr" podStartSLOduration=5.564613226 podStartE2EDuration="5.564613226s" podCreationTimestamp="2026-01-28 01:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:22:59.564277953 +0000 UTC m=+8.233085608" watchObservedRunningTime="2026-01-28 01:22:59.564613226 +0000 UTC m=+8.233420871" Jan 28 01:22:59.865140 containerd[1624]: time="2026-01-28T01:22:59.864563512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jgj7g,Uid:c0d25caa-09d1-471b-a8a8-142c262a7655,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:22:59.984752 containerd[1624]: 
time="2026-01-28T01:22:59.982677514Z" level=info msg="connecting to shim 12249ecd079ba8e7facdd83abe00a37c064b0f02369864baa816dd930c523b39" address="unix:///run/containerd/s/12496eaba2707aa40166886f94b5488c669502b0f4c99667591ddb7940015b77" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:23:00.236643 systemd[1]: Started cri-containerd-12249ecd079ba8e7facdd83abe00a37c064b0f02369864baa816dd930c523b39.scope - libcontainer container 12249ecd079ba8e7facdd83abe00a37c064b0f02369864baa816dd930c523b39. Jan 28 01:23:00.426503 kubelet[2995]: E0128 01:23:00.424811 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:00.494000 audit: BPF prog-id=141 op=LOAD Jan 28 01:23:00.496000 audit[3205]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:00.496000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdacccdfe0 a2=0 a3=7ffdacccdfcc items=0 ppid=3100 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:23:00.505000 audit[3207]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:00.505000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf4f34b20 a2=0 a3=7ffdf4f34b0c items=0 ppid=3100 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
28 01:23:00.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:23:00.512000 audit[3204]: NETFILTER_CFG table=mangle:56 family=2 entries=1 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:00.512000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8c799100 a2=0 a3=7ffc8c7990ec items=0 ppid=3100 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.512000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:23:00.518000 audit: BPF prog-id=142 op=LOAD Jan 28 01:23:00.518000 audit[3171]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3148 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132323439656364303739626138653766616364643833616265303061 Jan 28 01:23:00.525000 audit: BPF prog-id=142 op=UNLOAD Jan 28 01:23:00.525000 audit[3171]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3148 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.525000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132323439656364303739626138653766616364643833616265303061 Jan 28 01:23:00.538000 audit: BPF prog-id=143 op=LOAD Jan 28 01:23:00.538000 audit[3171]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3148 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132323439656364303739626138653766616364643833616265303061 Jan 28 01:23:00.538000 audit: BPF prog-id=144 op=LOAD Jan 28 01:23:00.538000 audit[3171]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3148 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132323439656364303739626138653766616364643833616265303061 Jan 28 01:23:00.538000 audit: BPF prog-id=144 op=UNLOAD Jan 28 01:23:00.538000 audit[3171]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3148 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:23:00.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132323439656364303739626138653766616364643833616265303061 Jan 28 01:23:00.538000 audit: BPF prog-id=143 op=UNLOAD Jan 28 01:23:00.538000 audit[3171]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3148 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132323439656364303739626138653766616364643833616265303061 Jan 28 01:23:00.543000 audit[3208]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:00.543000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2628b510 a2=0 a3=7ffd2628b4fc items=0 ppid=3100 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:23:00.538000 audit: BPF prog-id=145 op=LOAD Jan 28 01:23:00.538000 audit[3171]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3148 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132323439656364303739626138653766616364643833616265303061 Jan 28 01:23:00.564000 audit[3210]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:00.564000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe88103d80 a2=0 a3=7ffe88103d6c items=0 ppid=3100 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:23:00.612000 audit[3212]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:00.612000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3e7c5d00 a2=0 a3=7ffe3e7c5cec items=0 ppid=3100 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:23:00.653000 audit[3213]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:00.653000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffedeed71b0 
a2=0 a3=7ffedeed719c items=0 ppid=3100 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:23:00.696000 audit[3215]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:00.696000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff7db90620 a2=0 a3=7fff7db9060c items=0 ppid=3100 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 01:23:00.780000 audit[3225]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:00.780000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd1ec13fc0 a2=0 a3=7ffd1ec13fac items=0 ppid=3100 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.780000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 01:23:00.789000 audit[3226]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:00.789000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff14437a80 a2=0 a3=7fff14437a6c items=0 ppid=3100 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:23:00.823662 containerd[1624]: time="2026-01-28T01:23:00.822744618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jgj7g,Uid:c0d25caa-09d1-471b-a8a8-142c262a7655,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"12249ecd079ba8e7facdd83abe00a37c064b0f02369864baa816dd930c523b39\"" Jan 28 01:23:00.845000 audit[3228]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:00.845000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3aa0ef40 a2=0 a3=7fff3aa0ef2c items=0 ppid=3100 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.845000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:23:00.860637 containerd[1624]: time="2026-01-28T01:23:00.859652466Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:23:00.866000 audit[3229]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:00.866000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff736853f0 a2=0 a3=7fff736853dc items=0 ppid=3100 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:00.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:23:01.024000 audit[3231]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.024000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8c086190 a2=0 a3=7ffc8c08617c items=0 ppid=3100 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.024000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 01:23:01.074000 audit[3234]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule 
pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.074000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe6a84b020 a2=0 a3=7ffe6a84b00c items=0 ppid=3100 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 01:23:01.083000 audit[3235]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.083000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3201a480 a2=0 a3=7fff3201a46c items=0 ppid=3100 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.083000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:23:01.115000 audit[3237]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.115000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8b3f6a60 a2=0 a3=7ffd8b3f6a4c items=0 ppid=3100 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.115000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:23:01.129000 audit[3238]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.129000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc27615430 a2=0 a3=7ffc2761541c items=0 ppid=3100 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:23:01.146000 audit[3240]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.146000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcd70eed50 a2=0 a3=7ffcd70eed3c items=0 ppid=3100 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:23:01.169387 kubelet[2995]: E0128 01:23:01.132771 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:01.230000 
audit[3243]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.230000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff88acd1d0 a2=0 a3=7fff88acd1bc items=0 ppid=3100 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:23:01.293000 audit[3246]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.293000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcaf7d62a0 a2=0 a3=7ffcaf7d628c items=0 ppid=3100 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 01:23:01.303000 audit[3247]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.303000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd82c30820 a2=0 a3=7ffd82c3080c items=0 ppid=3100 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:23:01.337000 audit[3249]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.337000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc751b67d0 a2=0 a3=7ffc751b67bc items=0 ppid=3100 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:23:01.409185 kernel: kauditd_printk_skb: 115 callbacks suppressed Jan 28 01:23:01.409358 kernel: audit: type=1325 audit(1769563381.382:505): table=nat:76 family=2 entries=1 op=nft_register_rule pid=3252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.382000 audit[3252]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.382000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa2a256a0 a2=0 a3=7fffa2a2568c items=0 ppid=3100 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.466991 kubelet[2995]: E0128 01:23:01.463877 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:01.507708 kernel: audit: type=1300 audit(1769563381.382:505): arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa2a256a0 a2=0 a3=7fffa2a2568c items=0 ppid=3100 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.508833 kernel: audit: type=1327 audit(1769563381.382:505): proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:23:01.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:23:01.392000 audit[3253]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.598807 kernel: audit: type=1325 audit(1769563381.392:506): table=nat:77 family=2 entries=1 op=nft_register_chain pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.392000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff651d6310 a2=0 a3=7fff651d62fc items=0 ppid=3100 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:23:01.668144 kernel: audit: type=1300 audit(1769563381.392:506): arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7fff651d6310 a2=0 a3=7fff651d62fc items=0 ppid=3100 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.668231 kernel: audit: type=1327 audit(1769563381.392:506): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:23:01.421000 audit[3255]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.681914 kernel: audit: type=1325 audit(1769563381.421:507): table=nat:78 family=2 entries=1 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:23:01.421000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc51060d20 a2=0 a3=7ffc51060d0c items=0 ppid=3100 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:23:01.778500 kernel: audit: type=1300 audit(1769563381.421:507): arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc51060d20 a2=0 a3=7ffc51060d0c items=0 ppid=3100 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.778670 kernel: audit: type=1327 audit(1769563381.421:507): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:23:01.826548 kubelet[2995]: E0128 01:23:01.825920 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:01.905129 kernel: audit: type=1325 audit(1769563381.871:508): table=filter:79 family=2 entries=8 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:01.871000 audit[3261]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:01.871000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff64b757c0 a2=0 a3=7fff64b757ac items=0 ppid=3100 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:01.977000 audit[3261]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:01.977000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff64b757c0 a2=0 a3=7fff64b757ac items=0 ppid=3100 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:01.977000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:02.001000 audit[3266]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.001000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe8eca2a00 a2=0 a3=7ffe8eca29ec items=0 ppid=3100 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:23:02.095000 audit[3268]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.095000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc01668fc0 a2=0 a3=7ffc01668fac items=0 ppid=3100 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 01:23:02.211000 audit[3271]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.211000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc83dbb2b0 a2=0 a3=7ffc83dbb29c items=0 ppid=3100 pid=3271 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 01:23:02.224000 audit[3272]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.224000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe191f0150 a2=0 a3=7ffe191f013c items=0 ppid=3100 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:23:02.275000 audit[3274]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.275000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcfaca3c90 a2=0 a3=7ffcfaca3c7c items=0 ppid=3100 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.275000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:23:02.317000 
audit[3275]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.317000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0ea5e9c0 a2=0 a3=7ffd0ea5e9ac items=0 ppid=3100 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:23:02.355000 audit[3278]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.355000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe40437f10 a2=0 a3=7ffe40437efc items=0 ppid=3100 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 01:23:02.408000 audit[3284]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.408000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd672c9620 a2=0 a3=7ffd672c960c items=0 ppid=3100 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:23:02.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 01:23:02.451000 audit[3285]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.451000 audit[3285]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe5f31440 a2=0 a3=7fffe5f3142c items=0 ppid=3100 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:23:02.498859 kubelet[2995]: E0128 01:23:02.495980 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:02.513000 audit[3287]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.513000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0b6da5a0 a2=0 a3=7ffd0b6da58c items=0 ppid=3100 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.513000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:23:02.532000 audit[3288]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.532000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe27c80260 a2=0 a3=7ffe27c8024c items=0 ppid=3100 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.532000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:23:02.542249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413573635.mount: Deactivated successfully. 
Jan 28 01:23:02.674000 audit[3290]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.674000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5343f730 a2=0 a3=7ffc5343f71c items=0 ppid=3100 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:23:02.843000 audit[3293]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.843000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe3635ca90 a2=0 a3=7ffe3635ca7c items=0 ppid=3100 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 01:23:02.902000 audit[3296]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.902000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9ab18ed0 a2=0 a3=7ffe9ab18ebc items=0 ppid=3100 pid=3296 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 01:23:02.925000 audit[3297]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.925000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8ea12160 a2=0 a3=7ffe8ea1214c items=0 ppid=3100 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:23:02.979000 audit[3299]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:02.979000 audit[3299]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff91488ce0 a2=0 a3=7fff91488ccc items=0 ppid=3100 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:02.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:23:03.016000 audit[3302]: NETFILTER_CFG 
table=nat:97 family=10 entries=1 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:03.016000 audit[3302]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd96591020 a2=0 a3=7ffd9659100c items=0 ppid=3100 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:03.016000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:23:03.025000 audit[3303]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:03.025000 audit[3303]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc31d17c30 a2=0 a3=7ffc31d17c1c items=0 ppid=3100 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:03.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:23:03.056000 audit[3305]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:03.056000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff0f97ef00 a2=0 a3=7fff0f97eeec items=0 ppid=3100 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:03.056000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:23:03.074000 audit[3306]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:03.074000 audit[3306]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa6636eb0 a2=0 a3=7fffa6636e9c items=0 ppid=3100 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:03.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:23:03.104000 audit[3308]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:03.104000 audit[3308]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcd39eaa20 a2=0 a3=7ffcd39eaa0c items=0 ppid=3100 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:03.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:23:03.167000 audit[3311]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:23:03.167000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc67ad4960 a2=0 a3=7ffc67ad494c items=0 ppid=3100 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:03.167000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:23:03.221000 audit[3313]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:23:03.221000 audit[3313]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe3cbf0040 a2=0 a3=7ffe3cbf002c items=0 ppid=3100 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:03.221000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:03.226000 audit[3313]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:23:03.226000 audit[3313]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe3cbf0040 a2=0 a3=7ffe3cbf002c items=0 ppid=3100 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:03.226000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:13.296085 containerd[1624]: time="2026-01-28T01:23:13.295139432Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:23:13.302186 containerd[1624]: time="2026-01-28T01:23:13.301956000Z" level=info 
msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 28 01:23:13.310178 containerd[1624]: time="2026-01-28T01:23:13.309569361Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:23:13.362437 containerd[1624]: time="2026-01-28T01:23:13.361695653Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:23:13.391517 containerd[1624]: time="2026-01-28T01:23:13.389821753Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 12.528632611s" Jan 28 01:23:13.391517 containerd[1624]: time="2026-01-28T01:23:13.390888816Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 01:23:13.420879 containerd[1624]: time="2026-01-28T01:23:13.420805123Z" level=info msg="CreateContainer within sandbox \"12249ecd079ba8e7facdd83abe00a37c064b0f02369864baa816dd930c523b39\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:23:13.506773 containerd[1624]: time="2026-01-28T01:23:13.505912338Z" level=info msg="Container 516a0f43159d67d16261409a30db2c8da990f4e4a5b172a2f39491a046095d15: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:23:13.514978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2979088952.mount: Deactivated successfully. 
Jan 28 01:23:13.565284 containerd[1624]: time="2026-01-28T01:23:13.563668553Z" level=info msg="CreateContainer within sandbox \"12249ecd079ba8e7facdd83abe00a37c064b0f02369864baa816dd930c523b39\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"516a0f43159d67d16261409a30db2c8da990f4e4a5b172a2f39491a046095d15\"" Jan 28 01:23:13.571235 containerd[1624]: time="2026-01-28T01:23:13.568204861Z" level=info msg="StartContainer for \"516a0f43159d67d16261409a30db2c8da990f4e4a5b172a2f39491a046095d15\"" Jan 28 01:23:13.581564 containerd[1624]: time="2026-01-28T01:23:13.580277294Z" level=info msg="connecting to shim 516a0f43159d67d16261409a30db2c8da990f4e4a5b172a2f39491a046095d15" address="unix:///run/containerd/s/12496eaba2707aa40166886f94b5488c669502b0f4c99667591ddb7940015b77" protocol=ttrpc version=3 Jan 28 01:23:13.847271 systemd[1]: Started cri-containerd-516a0f43159d67d16261409a30db2c8da990f4e4a5b172a2f39491a046095d15.scope - libcontainer container 516a0f43159d67d16261409a30db2c8da990f4e4a5b172a2f39491a046095d15. 
Jan 28 01:23:13.949000 audit: BPF prog-id=146 op=LOAD Jan 28 01:23:13.958553 kernel: kauditd_printk_skb: 77 callbacks suppressed Jan 28 01:23:13.958658 kernel: audit: type=1334 audit(1769563393.949:534): prog-id=146 op=LOAD Jan 28 01:23:13.964781 kernel: audit: type=1334 audit(1769563393.962:535): prog-id=147 op=LOAD Jan 28 01:23:13.962000 audit: BPF prog-id=147 op=LOAD Jan 28 01:23:13.962000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fc238 a2=98 a3=0 items=0 ppid=3148 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:13.995126 kernel: audit: type=1300 audit(1769563393.962:535): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fc238 a2=98 a3=0 items=0 ppid=3148 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:13.999103 kernel: audit: type=1327 audit(1769563393.962:535): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 Jan 28 01:23:13.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 Jan 28 01:23:13.962000 audit: BPF prog-id=147 op=UNLOAD Jan 28 01:23:14.076580 kernel: audit: type=1334 audit(1769563393.962:536): prog-id=147 op=UNLOAD Jan 28 01:23:13.962000 audit[3318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3148 pid=3318 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:14.117587 kernel: audit: type=1300 audit(1769563393.962:536): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3148 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:14.117749 kernel: audit: type=1327 audit(1769563393.962:536): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 Jan 28 01:23:13.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 Jan 28 01:23:13.962000 audit: BPF prog-id=148 op=LOAD Jan 28 01:23:13.962000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fc488 a2=98 a3=0 items=0 ppid=3148 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:14.207735 kernel: audit: type=1334 audit(1769563393.962:537): prog-id=148 op=LOAD Jan 28 01:23:14.211477 kernel: audit: type=1300 audit(1769563393.962:537): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fc488 a2=98 a3=0 items=0 ppid=3148 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:23:13.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 Jan 28 01:23:13.962000 audit: BPF prog-id=149 op=LOAD Jan 28 01:23:13.962000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0000fc218 a2=98 a3=0 items=0 ppid=3148 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:13.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 Jan 28 01:23:13.962000 audit: BPF prog-id=149 op=UNLOAD Jan 28 01:23:14.267193 kernel: audit: type=1327 audit(1769563393.962:537): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 Jan 28 01:23:13.962000 audit[3318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3148 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:13.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 
Jan 28 01:23:13.962000 audit: BPF prog-id=148 op=UNLOAD Jan 28 01:23:13.962000 audit[3318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3148 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:13.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 Jan 28 01:23:13.962000 audit: BPF prog-id=150 op=LOAD Jan 28 01:23:13.962000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fc6e8 a2=98 a3=0 items=0 ppid=3148 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:13.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531366130663433313539643637643136323631343039613330646232 Jan 28 01:23:14.413982 containerd[1624]: time="2026-01-28T01:23:14.410655496Z" level=info msg="StartContainer for \"516a0f43159d67d16261409a30db2c8da990f4e4a5b172a2f39491a046095d15\" returns successfully" Jan 28 01:23:26.816000 audit[1852]: USER_END pid=1852 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:23:26.833379 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 01:23:26.833467 kernel: audit: type=1106 audit(1769563406.816:542): pid=1852 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:23:26.817577 sudo[1852]: pam_unix(sudo:session): session closed for user root Jan 28 01:23:26.845805 sshd[1851]: Connection closed by 10.0.0.1 port 34506 Jan 28 01:23:26.861274 kernel: audit: type=1104 audit(1769563406.816:543): pid=1852 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:23:26.816000 audit[1852]: CRED_DISP pid=1852 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:23:26.854551 sshd-session[1847]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:26.882583 kernel: audit: type=1106 audit(1769563406.864:544): pid=1847 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:23:26.864000 audit[1847]: USER_END pid=1847 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:23:26.880665 systemd[1]: sshd@8-10.0.0.61:22-10.0.0.1:34506.service: Deactivated successfully. Jan 28 01:23:26.864000 audit[1847]: CRED_DISP pid=1847 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:23:26.884953 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:23:26.886187 systemd[1]: session-10.scope: Consumed 19.127s CPU time, 212.7M memory peak. Jan 28 01:23:26.888722 systemd-logind[1590]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:23:26.891203 systemd-logind[1590]: Removed session 10. Jan 28 01:23:26.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.61:22-10.0.0.1:34506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:23:26.907098 kernel: audit: type=1104 audit(1769563406.864:545): pid=1847 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:23:26.907181 kernel: audit: type=1131 audit(1769563406.879:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.61:22-10.0.0.1:34506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:34.926000 audit[3419]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:34.959190 kernel: audit: type=1325 audit(1769563414.926:547): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:34.926000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc1576e800 a2=0 a3=7ffc1576e7ec items=0 ppid=3100 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:35.003655 kernel: audit: type=1300 audit(1769563414.926:547): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc1576e800 a2=0 a3=7ffc1576e7ec items=0 ppid=3100 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:35.003799 kernel: audit: type=1327 audit(1769563414.926:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:34.926000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:35.048804 kernel: audit: type=1325 audit(1769563414.999:548): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:34.999000 audit[3419]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:34.999000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1576e800 a2=0 a3=0 items=0 ppid=3100 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:35.087123 kernel: audit: type=1300 audit(1769563414.999:548): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1576e800 a2=0 a3=0 items=0 ppid=3100 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:34.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:35.105121 kernel: audit: type=1327 audit(1769563414.999:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:35.159000 audit[3421]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3421 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:35.179218 kernel: audit: type=1325 audit(1769563415.159:549): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3421 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:35.159000 audit[3421]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=5992 a0=3 a1=7ffe4f7ec190 a2=0 a3=7ffe4f7ec17c items=0 ppid=3100 pid=3421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:35.159000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:35.243479 kernel: audit: type=1300 audit(1769563415.159:549): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe4f7ec190 a2=0 a3=7ffe4f7ec17c items=0 ppid=3100 pid=3421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:35.243855 kernel: audit: type=1327 audit(1769563415.159:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:35.244000 audit[3421]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3421 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:35.244000 audit[3421]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4f7ec190 a2=0 a3=0 items=0 ppid=3100 pid=3421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:35.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:35.271417 kernel: audit: type=1325 audit(1769563415.244:550): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3421 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:40.912215 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 28 
01:23:40.912446 kernel: audit: type=1325 audit(1769563420.882:551): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:40.882000 audit[3426]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:40.882000 audit[3426]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe8e096500 a2=0 a3=7ffe8e0964ec items=0 ppid=3100 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:40.960205 kernel: audit: type=1300 audit(1769563420.882:551): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe8e096500 a2=0 a3=7ffe8e0964ec items=0 ppid=3100 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:40.960391 kernel: audit: type=1327 audit(1769563420.882:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:40.882000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:40.958000 audit[3426]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:41.020357 kernel: audit: type=1325 audit(1769563420.958:552): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:41.020418 kernel: audit: type=1300 audit(1769563420.958:552): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8e096500 a2=0 a3=0 items=0 
ppid=3100 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:40.958000 audit[3426]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8e096500 a2=0 a3=0 items=0 ppid=3100 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:41.063716 kernel: audit: type=1327 audit(1769563420.958:552): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:40.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:41.114000 audit[3428]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:41.114000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff62bd6aa0 a2=0 a3=7fff62bd6a8c items=0 ppid=3100 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:41.189597 kernel: audit: type=1325 audit(1769563421.114:553): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:41.189751 kernel: audit: type=1300 audit(1769563421.114:553): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff62bd6aa0 a2=0 a3=7fff62bd6a8c items=0 ppid=3100 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:41.189806 kernel: audit: type=1327 audit(1769563421.114:553): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:41.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:41.208000 audit[3428]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:41.242606 kernel: audit: type=1325 audit(1769563421.208:554): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:41.208000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff62bd6aa0 a2=0 a3=0 items=0 ppid=3100 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:41.208000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:42.291000 audit[3430]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:42.291000 audit[3430]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe5034be90 a2=0 a3=7ffe5034be7c items=0 ppid=3100 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:42.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:42.303000 audit[3430]: 
NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:42.303000 audit[3430]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5034be90 a2=0 a3=0 items=0 ppid=3100 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:42.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:48.057000 audit[3434]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:48.087108 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 28 01:23:48.087335 kernel: audit: type=1325 audit(1769563428.057:557): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:48.057000 audit[3434]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffea11ecc20 a2=0 a3=7ffea11ecc0c items=0 ppid=3100 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:48.185600 kernel: audit: type=1300 audit(1769563428.057:557): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffea11ecc20 a2=0 a3=7ffea11ecc0c items=0 ppid=3100 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:48.057000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 28 01:23:48.215949 kernel: audit: type=1327 audit(1769563428.057:557): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:48.238000 audit[3434]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:48.276918 kernel: audit: type=1325 audit(1769563428.238:558): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:48.238000 audit[3434]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea11ecc20 a2=0 a3=0 items=0 ppid=3100 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:48.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:48.378109 kubelet[2995]: I0128 01:23:48.377674 2995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jgj7g" podStartSLOduration=36.831628571 podStartE2EDuration="49.377649883s" podCreationTimestamp="2026-01-28 01:22:59 +0000 UTC" firstStartedPulling="2026-01-28 01:23:00.855220775 +0000 UTC m=+9.524028420" lastFinishedPulling="2026-01-28 01:23:13.401242086 +0000 UTC m=+22.070049732" observedRunningTime="2026-01-28 01:23:14.985275443 +0000 UTC m=+23.654083089" watchObservedRunningTime="2026-01-28 01:23:48.377649883 +0000 UTC m=+57.046457548" Jan 28 01:23:48.383651 kernel: audit: type=1300 audit(1769563428.238:558): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea11ecc20 a2=0 a3=0 items=0 ppid=3100 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:48.383719 kernel: audit: type=1327 audit(1769563428.238:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:48.483110 kernel: audit: type=1325 audit(1769563428.444:559): table=filter:117 family=2 entries=22 op=nft_register_rule pid=3437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:48.444000 audit[3437]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:48.483518 systemd[1]: Created slice kubepods-besteffort-podfcaa527d_dfa7_4962_822f_e43102f3ed44.slice - libcontainer container kubepods-besteffort-podfcaa527d_dfa7_4962_822f_e43102f3ed44.slice. Jan 28 01:23:48.578641 kernel: audit: type=1300 audit(1769563428.444:559): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd59eced10 a2=0 a3=7ffd59ececfc items=0 ppid=3100 pid=3437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:48.444000 audit[3437]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd59eced10 a2=0 a3=7ffd59ececfc items=0 ppid=3100 pid=3437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:48.444000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:48.624169 kernel: audit: type=1327 audit(1769563428.444:559): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:48.624324 kernel: audit: type=1325 
audit(1769563428.482:560): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:48.482000 audit[3437]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:48.624596 kubelet[2995]: I0128 01:23:48.616797 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcaa527d-dfa7-4962-822f-e43102f3ed44-tigera-ca-bundle\") pod \"calico-typha-66fbbd66cd-ntj2k\" (UID: \"fcaa527d-dfa7-4962-822f-e43102f3ed44\") " pod="calico-system/calico-typha-66fbbd66cd-ntj2k" Jan 28 01:23:48.624596 kubelet[2995]: I0128 01:23:48.616935 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fcaa527d-dfa7-4962-822f-e43102f3ed44-typha-certs\") pod \"calico-typha-66fbbd66cd-ntj2k\" (UID: \"fcaa527d-dfa7-4962-822f-e43102f3ed44\") " pod="calico-system/calico-typha-66fbbd66cd-ntj2k" Jan 28 01:23:48.482000 audit[3437]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd59eced10 a2=0 a3=0 items=0 ppid=3100 pid=3437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:48.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:48.652618 kubelet[2995]: I0128 01:23:48.616978 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fjc2\" (UniqueName: \"kubernetes.io/projected/fcaa527d-dfa7-4962-822f-e43102f3ed44-kube-api-access-9fjc2\") pod \"calico-typha-66fbbd66cd-ntj2k\" (UID: 
\"fcaa527d-dfa7-4962-822f-e43102f3ed44\") " pod="calico-system/calico-typha-66fbbd66cd-ntj2k" Jan 28 01:23:48.851575 systemd[1714]: Created slice background.slice - User Background Tasks Slice. Jan 28 01:23:48.864302 systemd[1714]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 28 01:23:49.136767 systemd[1714]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Jan 28 01:23:49.298787 kubelet[2995]: E0128 01:23:49.293525 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:49.455165 containerd[1624]: time="2026-01-28T01:23:49.416174757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66fbbd66cd-ntj2k,Uid:fcaa527d-dfa7-4962-822f-e43102f3ed44,Namespace:calico-system,Attempt:0,}" Jan 28 01:23:49.729098 kubelet[2995]: I0128 01:23:49.721393 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5adadb43-2e88-4d85-9883-63ca7b7e0373-cni-bin-dir\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.729098 kubelet[2995]: I0128 01:23:49.721453 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5adadb43-2e88-4d85-9883-63ca7b7e0373-cni-net-dir\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.729098 kubelet[2995]: I0128 01:23:49.721482 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5adadb43-2e88-4d85-9883-63ca7b7e0373-flexvol-driver-host\") pod 
\"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.729098 kubelet[2995]: I0128 01:23:49.721513 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5adadb43-2e88-4d85-9883-63ca7b7e0373-policysync\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.729098 kubelet[2995]: I0128 01:23:49.721561 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5adadb43-2e88-4d85-9883-63ca7b7e0373-var-run-calico\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.729963 kubelet[2995]: I0128 01:23:49.721587 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5adadb43-2e88-4d85-9883-63ca7b7e0373-tigera-ca-bundle\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.729963 kubelet[2995]: I0128 01:23:49.721611 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5adadb43-2e88-4d85-9883-63ca7b7e0373-var-lib-calico\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.729963 kubelet[2995]: I0128 01:23:49.721637 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5adadb43-2e88-4d85-9883-63ca7b7e0373-xtables-lock\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " 
pod="calico-system/calico-node-46chs" Jan 28 01:23:49.729963 kubelet[2995]: I0128 01:23:49.721670 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5adadb43-2e88-4d85-9883-63ca7b7e0373-node-certs\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.729963 kubelet[2995]: I0128 01:23:49.721693 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdgt\" (UniqueName: \"kubernetes.io/projected/5adadb43-2e88-4d85-9883-63ca7b7e0373-kube-api-access-ljdgt\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.730321 kubelet[2995]: I0128 01:23:49.721717 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5adadb43-2e88-4d85-9883-63ca7b7e0373-cni-log-dir\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.730321 kubelet[2995]: I0128 01:23:49.721745 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5adadb43-2e88-4d85-9883-63ca7b7e0373-lib-modules\") pod \"calico-node-46chs\" (UID: \"5adadb43-2e88-4d85-9883-63ca7b7e0373\") " pod="calico-system/calico-node-46chs" Jan 28 01:23:49.792955 systemd[1]: Created slice kubepods-besteffort-pod5adadb43_2e88_4d85_9883_63ca7b7e0373.slice - libcontainer container kubepods-besteffort-pod5adadb43_2e88_4d85_9883_63ca7b7e0373.slice. 
Jan 28 01:23:49.805000 audit[3447]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:49.805000 audit[3447]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd9d6ce400 a2=0 a3=7ffd9d6ce3ec items=0 ppid=3100 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:49.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:49.835000 audit[3447]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:23:49.835000 audit[3447]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9d6ce400 a2=0 a3=0 items=0 ppid=3100 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:49.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:23:49.860365 containerd[1624]: time="2026-01-28T01:23:49.858832483Z" level=info msg="connecting to shim b1e294b9cab2dfd134181e85da8c01ccfb64eed1b566222f7cb5f26a846ff1f6" address="unix:///run/containerd/s/30d1928ad59a5a6bf90c18c0d94a4bd760b3ed04567e4e5d5de14e58b1aa5edf" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:23:49.906046 kubelet[2995]: E0128 01:23:49.905682 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:49.906046 kubelet[2995]: W0128 01:23:49.905712 2995 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:49.906046 kubelet[2995]: E0128 01:23:49.905738 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:49.945123 kubelet[2995]: E0128 01:23:49.932916 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:49.945123 kubelet[2995]: W0128 01:23:49.932954 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:49.945123 kubelet[2995]: E0128 01:23:49.943372 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:49.999816 kubelet[2995]: E0128 01:23:49.999531 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:23:50.031734 kubelet[2995]: E0128 01:23:50.031539 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.031734 kubelet[2995]: W0128 01:23:50.031615 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.031734 kubelet[2995]: E0128 01:23:50.031652 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.040374 kubelet[2995]: E0128 01:23:50.039990 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.040374 kubelet[2995]: W0128 01:23:50.040372 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.040589 kubelet[2995]: E0128 01:23:50.040404 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.058364 kubelet[2995]: E0128 01:23:50.047989 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.058364 kubelet[2995]: W0128 01:23:50.048121 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.058364 kubelet[2995]: E0128 01:23:50.048156 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.058364 kubelet[2995]: E0128 01:23:50.048648 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.058364 kubelet[2995]: W0128 01:23:50.048662 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.058364 kubelet[2995]: E0128 01:23:50.048683 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.058364 kubelet[2995]: E0128 01:23:50.049371 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.058364 kubelet[2995]: W0128 01:23:50.049387 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.058364 kubelet[2995]: E0128 01:23:50.049403 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.058812 kubelet[2995]: I0128 01:23:50.049448 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/845c6024-31b8-4f74-be49-c76c18f222f2-kubelet-dir\") pod \"csi-node-driver-kgc2v\" (UID: \"845c6024-31b8-4f74-be49-c76c18f222f2\") " pod="calico-system/csi-node-driver-kgc2v" Jan 28 01:23:50.058812 kubelet[2995]: E0128 01:23:50.049917 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.058812 kubelet[2995]: W0128 01:23:50.049933 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.058812 kubelet[2995]: E0128 01:23:50.049947 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.058812 kubelet[2995]: E0128 01:23:50.051893 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.058812 kubelet[2995]: W0128 01:23:50.051907 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.058812 kubelet[2995]: E0128 01:23:50.051923 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.058812 kubelet[2995]: E0128 01:23:50.055667 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.058812 kubelet[2995]: W0128 01:23:50.055687 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.059333 kubelet[2995]: E0128 01:23:50.055708 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.059333 kubelet[2995]: E0128 01:23:50.057786 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.059333 kubelet[2995]: W0128 01:23:50.057800 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.059333 kubelet[2995]: E0128 01:23:50.057820 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.060176 kubelet[2995]: E0128 01:23:50.060083 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.060176 kubelet[2995]: W0128 01:23:50.060136 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.060176 kubelet[2995]: E0128 01:23:50.060155 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.064398 kubelet[2995]: E0128 01:23:50.061830 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.064398 kubelet[2995]: W0128 01:23:50.061892 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.064398 kubelet[2995]: E0128 01:23:50.061913 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.065553 kubelet[2995]: E0128 01:23:50.065481 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.065553 kubelet[2995]: W0128 01:23:50.065538 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.065679 kubelet[2995]: E0128 01:23:50.065560 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.072071 kubelet[2995]: E0128 01:23:50.068595 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.072071 kubelet[2995]: W0128 01:23:50.068662 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.072071 kubelet[2995]: E0128 01:23:50.068683 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.072071 kubelet[2995]: E0128 01:23:50.068943 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.072071 kubelet[2995]: W0128 01:23:50.068953 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.072071 kubelet[2995]: E0128 01:23:50.068964 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.076322 kubelet[2995]: E0128 01:23:50.073994 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.076322 kubelet[2995]: W0128 01:23:50.074111 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.076322 kubelet[2995]: E0128 01:23:50.074131 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.076322 kubelet[2995]: E0128 01:23:50.074482 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.076322 kubelet[2995]: W0128 01:23:50.074493 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.076322 kubelet[2995]: E0128 01:23:50.074508 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.076322 kubelet[2995]: E0128 01:23:50.074737 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.076322 kubelet[2995]: W0128 01:23:50.074747 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.076322 kubelet[2995]: E0128 01:23:50.074759 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.076322 kubelet[2995]: E0128 01:23:50.074975 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.076666 kubelet[2995]: W0128 01:23:50.074986 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.076666 kubelet[2995]: E0128 01:23:50.075088 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.076666 kubelet[2995]: E0128 01:23:50.075610 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.076666 kubelet[2995]: W0128 01:23:50.075622 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.076666 kubelet[2995]: E0128 01:23:50.075634 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.076666 kubelet[2995]: E0128 01:23:50.075878 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.076666 kubelet[2995]: W0128 01:23:50.075887 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.076666 kubelet[2995]: E0128 01:23:50.075898 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.079338 kubelet[2995]: E0128 01:23:50.077596 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.079338 kubelet[2995]: W0128 01:23:50.077661 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.079338 kubelet[2995]: E0128 01:23:50.077676 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.079338 kubelet[2995]: E0128 01:23:50.077905 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.079338 kubelet[2995]: W0128 01:23:50.077915 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.079338 kubelet[2995]: E0128 01:23:50.077926 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.096095 kubelet[2995]: E0128 01:23:50.090077 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.096095 kubelet[2995]: W0128 01:23:50.090112 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.096095 kubelet[2995]: E0128 01:23:50.090141 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.127486 kubelet[2995]: E0128 01:23:50.125720 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:50.139749 containerd[1624]: time="2026-01-28T01:23:50.137632283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-46chs,Uid:5adadb43-2e88-4d85-9883-63ca7b7e0373,Namespace:calico-system,Attempt:0,}" Jan 28 01:23:50.154102 kubelet[2995]: E0128 01:23:50.153992 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.157139 kubelet[2995]: W0128 01:23:50.154625 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.157139 kubelet[2995]: E0128 01:23:50.154750 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.157139 kubelet[2995]: I0128 01:23:50.154790 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/845c6024-31b8-4f74-be49-c76c18f222f2-socket-dir\") pod \"csi-node-driver-kgc2v\" (UID: \"845c6024-31b8-4f74-be49-c76c18f222f2\") " pod="calico-system/csi-node-driver-kgc2v" Jan 28 01:23:50.158579 systemd[1]: Started cri-containerd-b1e294b9cab2dfd134181e85da8c01ccfb64eed1b566222f7cb5f26a846ff1f6.scope - libcontainer container b1e294b9cab2dfd134181e85da8c01ccfb64eed1b566222f7cb5f26a846ff1f6. Jan 28 01:23:50.177415 kubelet[2995]: E0128 01:23:50.172784 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.177415 kubelet[2995]: W0128 01:23:50.172862 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.177415 kubelet[2995]: E0128 01:23:50.172893 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.177415 kubelet[2995]: I0128 01:23:50.172933 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/845c6024-31b8-4f74-be49-c76c18f222f2-varrun\") pod \"csi-node-driver-kgc2v\" (UID: \"845c6024-31b8-4f74-be49-c76c18f222f2\") " pod="calico-system/csi-node-driver-kgc2v" Jan 28 01:23:50.191710 kubelet[2995]: E0128 01:23:50.181585 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.191710 kubelet[2995]: W0128 01:23:50.190132 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.191710 kubelet[2995]: E0128 01:23:50.190193 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.207702 kubelet[2995]: I0128 01:23:50.197426 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/845c6024-31b8-4f74-be49-c76c18f222f2-registration-dir\") pod \"csi-node-driver-kgc2v\" (UID: \"845c6024-31b8-4f74-be49-c76c18f222f2\") " pod="calico-system/csi-node-driver-kgc2v" Jan 28 01:23:50.246952 kubelet[2995]: E0128 01:23:50.201209 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.246952 kubelet[2995]: W0128 01:23:50.246763 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.258405 kubelet[2995]: E0128 01:23:50.247529 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.260326 kubelet[2995]: E0128 01:23:50.259679 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.260326 kubelet[2995]: W0128 01:23:50.259715 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.260326 kubelet[2995]: E0128 01:23:50.259752 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.265623 kubelet[2995]: E0128 01:23:50.265414 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.265623 kubelet[2995]: W0128 01:23:50.265440 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.265623 kubelet[2995]: E0128 01:23:50.265470 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.267527 kubelet[2995]: E0128 01:23:50.267417 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.267527 kubelet[2995]: W0128 01:23:50.267433 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.267527 kubelet[2995]: E0128 01:23:50.267449 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.277902 kubelet[2995]: E0128 01:23:50.277490 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.277902 kubelet[2995]: W0128 01:23:50.277575 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.279128 kubelet[2995]: E0128 01:23:50.278775 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.297571 kubelet[2995]: E0128 01:23:50.297350 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.297571 kubelet[2995]: W0128 01:23:50.297383 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.297571 kubelet[2995]: E0128 01:23:50.297416 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.314838 kubelet[2995]: E0128 01:23:50.314506 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.314838 kubelet[2995]: W0128 01:23:50.314540 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.314838 kubelet[2995]: E0128 01:23:50.314570 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.314838 kubelet[2995]: I0128 01:23:50.314611 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6c5\" (UniqueName: \"kubernetes.io/projected/845c6024-31b8-4f74-be49-c76c18f222f2-kube-api-access-4l6c5\") pod \"csi-node-driver-kgc2v\" (UID: \"845c6024-31b8-4f74-be49-c76c18f222f2\") " pod="calico-system/csi-node-driver-kgc2v" Jan 28 01:23:50.322423 kubelet[2995]: E0128 01:23:50.322324 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.327116 kubelet[2995]: W0128 01:23:50.322847 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.327116 kubelet[2995]: E0128 01:23:50.322882 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.332759 kubelet[2995]: E0128 01:23:50.332658 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.332759 kubelet[2995]: W0128 01:23:50.333110 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.332759 kubelet[2995]: E0128 01:23:50.333148 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.345913 kubelet[2995]: E0128 01:23:50.342969 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.345913 kubelet[2995]: W0128 01:23:50.345639 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.345913 kubelet[2995]: E0128 01:23:50.345673 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.369063 kubelet[2995]: E0128 01:23:50.368540 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.369063 kubelet[2995]: W0128 01:23:50.368688 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.369063 kubelet[2995]: E0128 01:23:50.368819 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.408479 kubelet[2995]: E0128 01:23:50.408407 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.408479 kubelet[2995]: W0128 01:23:50.408446 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.408479 kubelet[2995]: E0128 01:23:50.408475 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.413310 kubelet[2995]: E0128 01:23:50.413172 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.413310 kubelet[2995]: W0128 01:23:50.413202 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.413310 kubelet[2995]: E0128 01:23:50.413295 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.425111 kubelet[2995]: E0128 01:23:50.424788 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.425111 kubelet[2995]: W0128 01:23:50.424816 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.425111 kubelet[2995]: E0128 01:23:50.424841 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.430086 kubelet[2995]: E0128 01:23:50.429634 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.430086 kubelet[2995]: W0128 01:23:50.429661 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.430086 kubelet[2995]: E0128 01:23:50.429687 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.441727 kubelet[2995]: E0128 01:23:50.441456 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.441727 kubelet[2995]: W0128 01:23:50.441492 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.441727 kubelet[2995]: E0128 01:23:50.441527 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.442509 kubelet[2995]: E0128 01:23:50.442492 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.442695 kubelet[2995]: W0128 01:23:50.442603 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.442695 kubelet[2995]: E0128 01:23:50.442624 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.448318 kubelet[2995]: E0128 01:23:50.448281 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.450345 kubelet[2995]: W0128 01:23:50.448882 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.450345 kubelet[2995]: E0128 01:23:50.450170 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.458908 kubelet[2995]: E0128 01:23:50.457421 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.458908 kubelet[2995]: W0128 01:23:50.457448 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.458908 kubelet[2995]: E0128 01:23:50.457475 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.459485 kubelet[2995]: E0128 01:23:50.459194 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.459485 kubelet[2995]: W0128 01:23:50.459208 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.459485 kubelet[2995]: E0128 01:23:50.459279 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.463092 kubelet[2995]: E0128 01:23:50.462186 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.463092 kubelet[2995]: W0128 01:23:50.462208 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.463092 kubelet[2995]: E0128 01:23:50.462279 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.467736 kubelet[2995]: E0128 01:23:50.463698 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.467736 kubelet[2995]: W0128 01:23:50.463713 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.467736 kubelet[2995]: E0128 01:23:50.463729 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.467841 containerd[1624]: time="2026-01-28T01:23:50.466994105Z" level=info msg="connecting to shim f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374" address="unix:///run/containerd/s/89b44edf8a2cbecd70ae4545d12b2340686ca0e79aeb70444693e5c66af3a95d" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:23:50.484394 kubelet[2995]: E0128 01:23:50.470159 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.484394 kubelet[2995]: W0128 01:23:50.470321 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.484394 kubelet[2995]: E0128 01:23:50.470346 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.496769 kubelet[2995]: E0128 01:23:50.495274 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.496769 kubelet[2995]: W0128 01:23:50.495313 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.496769 kubelet[2995]: E0128 01:23:50.495346 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.508960 kubelet[2995]: E0128 01:23:50.506630 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.508960 kubelet[2995]: W0128 01:23:50.506702 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.508960 kubelet[2995]: E0128 01:23:50.506734 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.510626 kubelet[2995]: E0128 01:23:50.510491 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.510626 kubelet[2995]: W0128 01:23:50.510561 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.510626 kubelet[2995]: E0128 01:23:50.510591 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.526097 kubelet[2995]: E0128 01:23:50.515737 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.526097 kubelet[2995]: W0128 01:23:50.515800 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.526097 kubelet[2995]: E0128 01:23:50.515826 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.534750 kubelet[2995]: E0128 01:23:50.531816 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.543380 kubelet[2995]: W0128 01:23:50.540456 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.543380 kubelet[2995]: E0128 01:23:50.540631 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.549795 kubelet[2995]: E0128 01:23:50.549639 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.549795 kubelet[2995]: W0128 01:23:50.549674 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.549795 kubelet[2995]: E0128 01:23:50.549704 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.565659 kubelet[2995]: E0128 01:23:50.565558 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.565659 kubelet[2995]: W0128 01:23:50.565640 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.565887 kubelet[2995]: E0128 01:23:50.565675 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.566465 kubelet[2995]: E0128 01:23:50.566308 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.566465 kubelet[2995]: W0128 01:23:50.566332 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.566465 kubelet[2995]: E0128 01:23:50.566346 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.567414 kubelet[2995]: E0128 01:23:50.567165 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.567414 kubelet[2995]: W0128 01:23:50.567181 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.567414 kubelet[2995]: E0128 01:23:50.567196 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.569580 kubelet[2995]: E0128 01:23:50.569562 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.569676 kubelet[2995]: W0128 01:23:50.569660 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.569760 kubelet[2995]: E0128 01:23:50.569734 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.571279 kubelet[2995]: E0128 01:23:50.571207 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.571414 kubelet[2995]: W0128 01:23:50.571364 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.571414 kubelet[2995]: E0128 01:23:50.571386 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:23:50.636000 audit: BPF prog-id=151 op=LOAD Jan 28 01:23:50.640000 audit: BPF prog-id=152 op=LOAD Jan 28 01:23:50.640000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3454 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231653239346239636162326466643133343138316538356461386330 Jan 28 01:23:50.640000 audit: BPF prog-id=152 op=UNLOAD Jan 28 01:23:50.640000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231653239346239636162326466643133343138316538356461386330 Jan 28 01:23:50.640000 audit: BPF prog-id=153 op=LOAD Jan 28 01:23:50.640000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3454 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.640000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231653239346239636162326466643133343138316538356461386330 Jan 28 01:23:50.640000 audit: BPF prog-id=154 op=LOAD Jan 28 01:23:50.640000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3454 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231653239346239636162326466643133343138316538356461386330 Jan 28 01:23:50.640000 audit: BPF prog-id=154 op=UNLOAD Jan 28 01:23:50.640000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231653239346239636162326466643133343138316538356461386330 Jan 28 01:23:50.640000 audit: BPF prog-id=153 op=UNLOAD Jan 28 01:23:50.640000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:23:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231653239346239636162326466643133343138316538356461386330 Jan 28 01:23:50.640000 audit: BPF prog-id=155 op=LOAD Jan 28 01:23:50.640000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3454 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231653239346239636162326466643133343138316538356461386330 Jan 28 01:23:50.645471 kubelet[2995]: E0128 01:23:50.644431 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:23:50.645471 kubelet[2995]: W0128 01:23:50.644456 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:23:50.645471 kubelet[2995]: E0128 01:23:50.644485 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:23:50.700531 systemd[1]: Started cri-containerd-f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374.scope - libcontainer container f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374. 
Jan 28 01:23:50.768000 audit: BPF prog-id=156 op=LOAD Jan 28 01:23:50.783000 audit: BPF prog-id=157 op=LOAD Jan 28 01:23:50.783000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3545 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635316333356530366664303362303039326436633664346337353161 Jan 28 01:23:50.783000 audit: BPF prog-id=157 op=UNLOAD Jan 28 01:23:50.783000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635316333356530366664303362303039326436633664346337353161 Jan 28 01:23:50.784000 audit: BPF prog-id=158 op=LOAD Jan 28 01:23:50.784000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3545 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.784000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635316333356530366664303362303039326436633664346337353161 Jan 28 01:23:50.784000 audit: BPF prog-id=159 op=LOAD Jan 28 01:23:50.784000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3545 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635316333356530366664303362303039326436633664346337353161 Jan 28 01:23:50.784000 audit: BPF prog-id=159 op=UNLOAD Jan 28 01:23:50.784000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635316333356530366664303362303039326436633664346337353161 Jan 28 01:23:50.784000 audit: BPF prog-id=158 op=UNLOAD Jan 28 01:23:50.784000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:23:50.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635316333356530366664303362303039326436633664346337353161 Jan 28 01:23:50.784000 audit: BPF prog-id=160 op=LOAD Jan 28 01:23:50.784000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3545 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:50.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635316333356530366664303362303039326436633664346337353161 Jan 28 01:23:50.926887 containerd[1624]: time="2026-01-28T01:23:50.926749897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66fbbd66cd-ntj2k,Uid:fcaa527d-dfa7-4962-822f-e43102f3ed44,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1e294b9cab2dfd134181e85da8c01ccfb64eed1b566222f7cb5f26a846ff1f6\"" Jan 28 01:23:50.930129 containerd[1624]: time="2026-01-28T01:23:50.929962835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-46chs,Uid:5adadb43-2e88-4d85-9883-63ca7b7e0373,Namespace:calico-system,Attempt:0,} returns sandbox id \"f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374\"" Jan 28 01:23:50.937431 kubelet[2995]: E0128 01:23:50.937352 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:50.959865 kubelet[2995]: E0128 01:23:50.953661 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:50.960151 containerd[1624]: time="2026-01-28T01:23:50.958568132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 01:23:52.065782 kubelet[2995]: E0128 01:23:52.063722 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:23:52.231780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107723502.mount: Deactivated successfully. Jan 28 01:23:54.059602 kubelet[2995]: E0128 01:23:54.058892 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:23:56.070462 kubelet[2995]: E0128 01:23:56.068147 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:23:58.076699 kubelet[2995]: E0128 01:23:58.076472 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:23:59.094134 containerd[1624]: time="2026-01-28T01:23:59.093945596Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:23:59.107158 containerd[1624]: time="2026-01-28T01:23:59.104127370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 28 01:23:59.118137 containerd[1624]: time="2026-01-28T01:23:59.115574657Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:23:59.121934 containerd[1624]: time="2026-01-28T01:23:59.121420214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:23:59.122697 containerd[1624]: time="2026-01-28T01:23:59.122300953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 8.163686905s" Jan 28 01:23:59.122697 containerd[1624]: time="2026-01-28T01:23:59.122341588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 01:23:59.132823 containerd[1624]: time="2026-01-28T01:23:59.130603801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 01:23:59.422346 containerd[1624]: time="2026-01-28T01:23:59.421373002Z" level=info msg="CreateContainer within sandbox \"b1e294b9cab2dfd134181e85da8c01ccfb64eed1b566222f7cb5f26a846ff1f6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 
01:23:59.559369 containerd[1624]: time="2026-01-28T01:23:59.558762637Z" level=info msg="Container 6b072554be0c2642ddac2b83d7e5d4ab98bb4d2ea073283197f1b314d79b2389: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:23:59.609959 containerd[1624]: time="2026-01-28T01:23:59.606766902Z" level=info msg="CreateContainer within sandbox \"b1e294b9cab2dfd134181e85da8c01ccfb64eed1b566222f7cb5f26a846ff1f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6b072554be0c2642ddac2b83d7e5d4ab98bb4d2ea073283197f1b314d79b2389\"" Jan 28 01:23:59.611687 containerd[1624]: time="2026-01-28T01:23:59.611124119Z" level=info msg="StartContainer for \"6b072554be0c2642ddac2b83d7e5d4ab98bb4d2ea073283197f1b314d79b2389\"" Jan 28 01:23:59.612839 containerd[1624]: time="2026-01-28T01:23:59.612802381Z" level=info msg="connecting to shim 6b072554be0c2642ddac2b83d7e5d4ab98bb4d2ea073283197f1b314d79b2389" address="unix:///run/containerd/s/30d1928ad59a5a6bf90c18c0d94a4bd760b3ed04567e4e5d5de14e58b1aa5edf" protocol=ttrpc version=3 Jan 28 01:23:59.960402 systemd[1]: Started cri-containerd-6b072554be0c2642ddac2b83d7e5d4ab98bb4d2ea073283197f1b314d79b2389.scope - libcontainer container 6b072554be0c2642ddac2b83d7e5d4ab98bb4d2ea073283197f1b314d79b2389. 
Jan 28 01:24:00.063399 kernel: kauditd_printk_skb: 52 callbacks suppressed Jan 28 01:24:00.063568 kernel: audit: type=1334 audit(1769563440.049:579): prog-id=161 op=LOAD Jan 28 01:24:00.049000 audit: BPF prog-id=161 op=LOAD Jan 28 01:24:00.063732 kubelet[2995]: E0128 01:24:00.058584 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:00.055000 audit: BPF prog-id=162 op=LOAD Jan 28 01:24:00.055000 audit[3625]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.120490 kernel: audit: type=1334 audit(1769563440.055:580): prog-id=162 op=LOAD Jan 28 01:24:00.120631 kernel: audit: type=1300 audit(1769563440.055:580): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.120682 kernel: audit: type=1327 audit(1769563440.055:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.055000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.055000 audit: BPF prog-id=162 op=UNLOAD Jan 28 01:24:00.155290 kernel: audit: type=1334 audit(1769563440.055:581): prog-id=162 op=UNLOAD Jan 28 01:24:00.155475 kernel: audit: type=1300 audit(1769563440.055:581): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.055000 audit[3625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.189098 kernel: audit: type=1327 audit(1769563440.055:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.189318 kernel: audit: type=1334 audit(1769563440.055:582): prog-id=163 op=LOAD Jan 28 01:24:00.055000 audit: BPF prog-id=163 op=LOAD Jan 28 01:24:00.194242 kernel: audit: type=1300 audit(1769563440.055:582): arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001a8488 a2=98 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.055000 audit[3625]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.227166 kernel: audit: type=1327 audit(1769563440.055:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.055000 audit: BPF prog-id=164 op=LOAD Jan 28 01:24:00.055000 audit[3625]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.055000 audit: BPF prog-id=164 op=UNLOAD Jan 28 01:24:00.055000 audit[3625]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.055000 audit: BPF prog-id=163 op=UNLOAD Jan 28 01:24:00.055000 audit[3625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.055000 audit: BPF prog-id=165 op=LOAD Jan 28 01:24:00.055000 audit[3625]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3454 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662303732353534626530633236343264646163326238336437653564 Jan 28 01:24:00.342868 containerd[1624]: time="2026-01-28T01:24:00.342522711Z" 
level=info msg="StartContainer for \"6b072554be0c2642ddac2b83d7e5d4ab98bb4d2ea073283197f1b314d79b2389\" returns successfully" Jan 28 01:24:00.758291 containerd[1624]: time="2026-01-28T01:24:00.758079225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:00.766536 containerd[1624]: time="2026-01-28T01:24:00.763640379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 28 01:24:00.773501 containerd[1624]: time="2026-01-28T01:24:00.772109376Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:00.780251 containerd[1624]: time="2026-01-28T01:24:00.779968790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:00.782442 containerd[1624]: time="2026-01-28T01:24:00.781553779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.650856813s" Jan 28 01:24:00.782442 containerd[1624]: time="2026-01-28T01:24:00.781605676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 01:24:00.804103 containerd[1624]: time="2026-01-28T01:24:00.803390489Z" level=info msg="CreateContainer within sandbox 
\"f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:24:00.852135 containerd[1624]: time="2026-01-28T01:24:00.850751139Z" level=info msg="Container cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:00.883851 containerd[1624]: time="2026-01-28T01:24:00.883730350Z" level=info msg="CreateContainer within sandbox \"f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874\"" Jan 28 01:24:00.889621 containerd[1624]: time="2026-01-28T01:24:00.889172079Z" level=info msg="StartContainer for \"cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874\"" Jan 28 01:24:00.893477 containerd[1624]: time="2026-01-28T01:24:00.892382662Z" level=info msg="connecting to shim cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874" address="unix:///run/containerd/s/89b44edf8a2cbecd70ae4545d12b2340686ca0e79aeb70444693e5c66af3a95d" protocol=ttrpc version=3 Jan 28 01:24:00.949275 kubelet[2995]: E0128 01:24:00.949164 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:00.992184 kubelet[2995]: E0128 01:24:00.991878 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:00.992184 kubelet[2995]: W0128 01:24:00.992302 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:00.992582 kubelet[2995]: E0128 01:24:00.992462 2995 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:00.996451 kubelet[2995]: E0128 01:24:00.995572 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:00.996451 kubelet[2995]: W0128 01:24:00.995594 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:00.996451 kubelet[2995]: E0128 01:24:00.995616 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:00.996960 kubelet[2995]: E0128 01:24:00.996677 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:00.996960 kubelet[2995]: W0128 01:24:00.996694 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:00.996960 kubelet[2995]: E0128 01:24:00.996711 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.001282 kubelet[2995]: E0128 01:24:01.001128 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.001282 kubelet[2995]: W0128 01:24:01.001187 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.001282 kubelet[2995]: E0128 01:24:01.001262 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.010506 kubelet[2995]: E0128 01:24:01.003607 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.010506 kubelet[2995]: W0128 01:24:01.003661 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.010506 kubelet[2995]: E0128 01:24:01.003680 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.010506 kubelet[2995]: E0128 01:24:01.004346 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.010506 kubelet[2995]: W0128 01:24:01.004489 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.010506 kubelet[2995]: E0128 01:24:01.004505 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.010506 kubelet[2995]: E0128 01:24:01.006640 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.010506 kubelet[2995]: W0128 01:24:01.006654 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.010506 kubelet[2995]: E0128 01:24:01.006669 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.010506 kubelet[2995]: E0128 01:24:01.009170 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.011435 kubelet[2995]: W0128 01:24:01.009333 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.011435 kubelet[2995]: E0128 01:24:01.009462 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.013614 kubelet[2995]: E0128 01:24:01.013335 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.013614 kubelet[2995]: W0128 01:24:01.013474 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.014613 kubelet[2995]: E0128 01:24:01.013774 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.016608 kubelet[2995]: E0128 01:24:01.016175 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.016608 kubelet[2995]: W0128 01:24:01.016482 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.017084 kubelet[2995]: E0128 01:24:01.016937 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.020859 kubelet[2995]: E0128 01:24:01.020586 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.020859 kubelet[2995]: W0128 01:24:01.020608 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.020859 kubelet[2995]: E0128 01:24:01.020633 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.027101 kubelet[2995]: E0128 01:24:01.024746 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.027101 kubelet[2995]: W0128 01:24:01.025037 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.027101 kubelet[2995]: E0128 01:24:01.025065 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.031705 kubelet[2995]: E0128 01:24:01.030701 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.031705 kubelet[2995]: W0128 01:24:01.030729 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.031705 kubelet[2995]: E0128 01:24:01.030760 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.033443 kubelet[2995]: E0128 01:24:01.033124 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.036357 kubelet[2995]: W0128 01:24:01.033813 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.039079 kubelet[2995]: E0128 01:24:01.036842 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.039711 kubelet[2995]: E0128 01:24:01.039621 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.040488 systemd[1]: Started cri-containerd-cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874.scope - libcontainer container cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874. Jan 28 01:24:01.042765 kubelet[2995]: W0128 01:24:01.041246 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.042765 kubelet[2995]: E0128 01:24:01.041503 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.045626 kubelet[2995]: E0128 01:24:01.045499 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.046862 kubelet[2995]: W0128 01:24:01.046577 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.046862 kubelet[2995]: E0128 01:24:01.046608 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.050292 kubelet[2995]: E0128 01:24:01.050243 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.050559 kubelet[2995]: W0128 01:24:01.050437 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.050810 kubelet[2995]: E0128 01:24:01.050696 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.056095 kubelet[2995]: E0128 01:24:01.054637 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.056420 kubelet[2995]: W0128 01:24:01.056316 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.056784 kubelet[2995]: E0128 01:24:01.056676 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.071628 kubelet[2995]: E0128 01:24:01.071463 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.071628 kubelet[2995]: W0128 01:24:01.071494 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.071628 kubelet[2995]: E0128 01:24:01.071629 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.078363 kubelet[2995]: E0128 01:24:01.078302 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.078363 kubelet[2995]: W0128 01:24:01.078329 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.078363 kubelet[2995]: E0128 01:24:01.078356 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.081591 kubelet[2995]: E0128 01:24:01.080761 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.083799 kubelet[2995]: W0128 01:24:01.081501 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.084314 kubelet[2995]: E0128 01:24:01.084108 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.089498 kubelet[2995]: E0128 01:24:01.089361 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.091437 kubelet[2995]: W0128 01:24:01.091136 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.091721 kubelet[2995]: E0128 01:24:01.091600 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.096157 kubelet[2995]: E0128 01:24:01.096128 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.096527 kubelet[2995]: W0128 01:24:01.096389 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.101620 kubelet[2995]: E0128 01:24:01.101514 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.114956 kubelet[2995]: E0128 01:24:01.114770 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.116544 kubelet[2995]: W0128 01:24:01.115573 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.116544 kubelet[2995]: E0128 01:24:01.115738 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.119939 kubelet[2995]: E0128 01:24:01.119844 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.120340 kubelet[2995]: W0128 01:24:01.120139 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.120340 kubelet[2995]: E0128 01:24:01.120173 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.125854 kubelet[2995]: E0128 01:24:01.125701 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.126845 kubelet[2995]: W0128 01:24:01.126352 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.126845 kubelet[2995]: E0128 01:24:01.126646 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.134185 kubelet[2995]: E0128 01:24:01.133806 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.137308 kubelet[2995]: W0128 01:24:01.136599 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.137308 kubelet[2995]: E0128 01:24:01.136725 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.139099 kubelet[2995]: E0128 01:24:01.138930 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.139099 kubelet[2995]: W0128 01:24:01.138955 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.139099 kubelet[2995]: E0128 01:24:01.138980 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.140283 kubelet[2995]: E0128 01:24:01.139684 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.140283 kubelet[2995]: W0128 01:24:01.139698 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.140283 kubelet[2995]: E0128 01:24:01.139718 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.141497 kubelet[2995]: E0128 01:24:01.140468 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.141497 kubelet[2995]: W0128 01:24:01.141484 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.141605 kubelet[2995]: E0128 01:24:01.141507 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.144096 kubelet[2995]: E0128 01:24:01.143419 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.144096 kubelet[2995]: W0128 01:24:01.143479 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.144096 kubelet[2995]: E0128 01:24:01.143500 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.146923 kubelet[2995]: E0128 01:24:01.146367 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.146923 kubelet[2995]: W0128 01:24:01.146390 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.146923 kubelet[2995]: E0128 01:24:01.146411 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:01.152361 kubelet[2995]: E0128 01:24:01.149425 2995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:01.152361 kubelet[2995]: W0128 01:24:01.149446 2995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:01.152361 kubelet[2995]: E0128 01:24:01.149467 2995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:01.165108 kubelet[2995]: I0128 01:24:01.159561 2995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66fbbd66cd-ntj2k" podStartSLOduration=4.987984477 podStartE2EDuration="13.159380135s" podCreationTimestamp="2026-01-28 01:23:48 +0000 UTC" firstStartedPulling="2026-01-28 01:23:50.958203574 +0000 UTC m=+59.627011219" lastFinishedPulling="2026-01-28 01:23:59.129599232 +0000 UTC m=+67.798406877" observedRunningTime="2026-01-28 01:24:01.152886482 +0000 UTC m=+69.821694127" watchObservedRunningTime="2026-01-28 01:24:01.159380135 +0000 UTC m=+69.828187790" Jan 28 01:24:01.309000 audit[3724]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3724 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:01.309000 audit[3724]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd2bd0a410 a2=0 a3=7ffd2bd0a3fc items=0 ppid=3100 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:01.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:01.327000 audit: BPF prog-id=166 op=LOAD Jan 28 01:24:01.327000 audit[3664]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3545 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:01.327000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364356136393134306162353465373132636366383332656261623035 Jan 28 01:24:01.327000 audit: BPF prog-id=167 op=LOAD Jan 28 01:24:01.327000 audit[3664]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=3545 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:01.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364356136393134306162353465373132636366383332656261623035 Jan 28 01:24:01.327000 audit: BPF prog-id=167 op=UNLOAD Jan 28 01:24:01.327000 audit[3664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:01.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364356136393134306162353465373132636366383332656261623035 Jan 28 01:24:01.327000 audit: BPF prog-id=166 op=UNLOAD Jan 28 01:24:01.327000 audit[3664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:24:01.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364356136393134306162353465373132636366383332656261623035 Jan 28 01:24:01.327000 audit: BPF prog-id=168 op=LOAD Jan 28 01:24:01.327000 audit[3664]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=3545 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:01.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364356136393134306162353465373132636366383332656261623035 Jan 28 01:24:01.329000 audit[3724]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3724 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:01.329000 audit[3724]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd2bd0a410 a2=0 a3=7ffd2bd0a3fc items=0 ppid=3100 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:01.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:01.564134 containerd[1624]: time="2026-01-28T01:24:01.562528782Z" level=info msg="StartContainer for \"cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874\" returns successfully" Jan 28 01:24:01.582825 systemd[1]: cri-containerd-cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874.scope: 
Deactivated successfully. Jan 28 01:24:01.592000 audit: BPF prog-id=168 op=UNLOAD Jan 28 01:24:01.602543 containerd[1624]: time="2026-01-28T01:24:01.602362924Z" level=info msg="received container exit event container_id:\"cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874\" id:\"cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874\" pid:3699 exited_at:{seconds:1769563441 nanos:601388538}" Jan 28 01:24:01.690515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874-rootfs.mount: Deactivated successfully. Jan 28 01:24:01.978545 kubelet[2995]: E0128 01:24:01.974925 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:01.978545 kubelet[2995]: E0128 01:24:01.976159 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:02.058561 kubelet[2995]: E0128 01:24:02.058499 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:03.072076 kubelet[2995]: E0128 01:24:03.062688 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:03.072076 kubelet[2995]: E0128 01:24:03.069609 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:03.104313 containerd[1624]: 
time="2026-01-28T01:24:03.102695657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:24:04.059775 kubelet[2995]: E0128 01:24:04.059713 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:06.066763 kubelet[2995]: E0128 01:24:06.066513 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:06.072961 kubelet[2995]: E0128 01:24:06.072892 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:08.059142 kubelet[2995]: E0128 01:24:08.058645 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:10.067345 kubelet[2995]: E0128 01:24:10.066554 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:12.060221 kubelet[2995]: E0128 01:24:12.058506 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:14.066467 kubelet[2995]: E0128 01:24:14.058902 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:15.512844 kubelet[2995]: E0128 01:24:15.512735 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:18.225862 kubelet[2995]: E0128 01:24:18.224564 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.796s" Jan 28 01:24:18.313049 kubelet[2995]: E0128 01:24:18.310573 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:19.890816 kubelet[2995]: E0128 01:24:19.890557 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.666s" Jan 28 01:24:19.913366 kubelet[2995]: E0128 01:24:19.912929 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" 
podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:21.069380 kubelet[2995]: E0128 01:24:21.068311 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:22.076696 kubelet[2995]: E0128 01:24:22.076565 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:23.072388 kubelet[2995]: E0128 01:24:23.071899 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:24.066602 kubelet[2995]: E0128 01:24:24.066476 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:26.062282 kubelet[2995]: E0128 01:24:26.062222 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:28.080319 kubelet[2995]: E0128 01:24:28.077679 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:30.066645 kubelet[2995]: E0128 01:24:30.066308 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:30.165537 containerd[1624]: time="2026-01-28T01:24:30.163928595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:30.169284 containerd[1624]: time="2026-01-28T01:24:30.168325309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 28 01:24:30.179263 containerd[1624]: time="2026-01-28T01:24:30.179130494Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:30.189889 containerd[1624]: time="2026-01-28T01:24:30.188370973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:30.189889 containerd[1624]: time="2026-01-28T01:24:30.189525273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 27.086770215s" Jan 28 01:24:30.189889 containerd[1624]: time="2026-01-28T01:24:30.189555589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" 
returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 01:24:30.265423 containerd[1624]: time="2026-01-28T01:24:30.265317589Z" level=info msg="CreateContainer within sandbox \"f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:24:30.354912 containerd[1624]: time="2026-01-28T01:24:30.338587654Z" level=info msg="Container 40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:30.377319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245226267.mount: Deactivated successfully. Jan 28 01:24:30.416529 containerd[1624]: time="2026-01-28T01:24:30.413249527Z" level=info msg="CreateContainer within sandbox \"f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f\"" Jan 28 01:24:30.416529 containerd[1624]: time="2026-01-28T01:24:30.414550318Z" level=info msg="StartContainer for \"40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f\"" Jan 28 01:24:30.420521 containerd[1624]: time="2026-01-28T01:24:30.420274974Z" level=info msg="connecting to shim 40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f" address="unix:///run/containerd/s/89b44edf8a2cbecd70ae4545d12b2340686ca0e79aeb70444693e5c66af3a95d" protocol=ttrpc version=3 Jan 28 01:24:30.592706 systemd[1]: Started cri-containerd-40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f.scope - libcontainer container 40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f. 
Jan 28 01:24:30.850125 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 28 01:24:30.850406 kernel: audit: type=1334 audit(1769563470.826:595): prog-id=169 op=LOAD Jan 28 01:24:30.826000 audit: BPF prog-id=169 op=LOAD Jan 28 01:24:30.852453 kernel: audit: type=1300 audit(1769563470.826:595): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3545 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:30.826000 audit[3766]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3545 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:30.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646434346631663135313566333638663133653535386235613430 Jan 28 01:24:30.920428 kernel: audit: type=1327 audit(1769563470.826:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646434346631663135313566333638663133653535386235613430 Jan 28 01:24:30.920587 kernel: audit: type=1334 audit(1769563470.826:596): prog-id=170 op=LOAD Jan 28 01:24:30.826000 audit: BPF prog-id=170 op=LOAD Jan 28 01:24:30.963220 kernel: audit: type=1300 audit(1769563470.826:596): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3545 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:30.826000 audit[3766]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3545 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:30.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646434346631663135313566333638663133653535386235613430 Jan 28 01:24:30.826000 audit: BPF prog-id=170 op=UNLOAD Jan 28 01:24:31.098216 kernel: audit: type=1327 audit(1769563470.826:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646434346631663135313566333638663133653535386235613430 Jan 28 01:24:31.098373 kernel: audit: type=1334 audit(1769563470.826:597): prog-id=170 op=UNLOAD Jan 28 01:24:30.826000 audit[3766]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:31.168884 kernel: audit: type=1300 audit(1769563470.826:597): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:30.826000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646434346631663135313566333638663133653535386235613430 Jan 28 01:24:31.217620 containerd[1624]: time="2026-01-28T01:24:31.217533647Z" level=info msg="StartContainer for \"40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f\" returns successfully" Jan 28 01:24:31.259308 kernel: audit: type=1327 audit(1769563470.826:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646434346631663135313566333638663133653535386235613430 Jan 28 01:24:31.259476 kernel: audit: type=1334 audit(1769563470.826:598): prog-id=169 op=UNLOAD Jan 28 01:24:30.826000 audit: BPF prog-id=169 op=UNLOAD Jan 28 01:24:30.826000 audit[3766]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:30.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646434346631663135313566333638663133653535386235613430 Jan 28 01:24:30.828000 audit: BPF prog-id=171 op=LOAD Jan 28 01:24:30.828000 audit[3766]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3545 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:30.828000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646434346631663135313566333638663133653535386235613430 Jan 28 01:24:32.066620 kubelet[2995]: E0128 01:24:32.060785 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:32.280977 kubelet[2995]: E0128 01:24:32.280497 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:33.321252 kubelet[2995]: E0128 01:24:33.310707 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:34.066471 kubelet[2995]: E0128 01:24:34.066290 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:36.080731 kubelet[2995]: E0128 01:24:36.080210 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:38.061446 kubelet[2995]: E0128 01:24:38.060614 2995 pod_workers.go:1301] "Error syncing 
pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:38.505868 systemd[1]: cri-containerd-40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f.scope: Deactivated successfully. Jan 28 01:24:38.607258 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 28 01:24:38.608275 kernel: audit: type=1334 audit(1769563478.548:600): prog-id=171 op=UNLOAD Jan 28 01:24:38.548000 audit: BPF prog-id=171 op=UNLOAD Jan 28 01:24:38.506639 systemd[1]: cri-containerd-40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f.scope: Consumed 2.685s CPU time, 180.8M memory peak, 3.7M read from disk, 171.3M written to disk. Jan 28 01:24:38.659095 containerd[1624]: time="2026-01-28T01:24:38.658563591Z" level=info msg="received container exit event container_id:\"40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f\" id:\"40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f\" pid:3779 exited_at:{seconds:1769563478 nanos:651445580}" Jan 28 01:24:39.184385 kubelet[2995]: I0128 01:24:39.182668 2995 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:24:39.724551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f-rootfs.mount: Deactivated successfully. 
Jan 28 01:24:39.966960 kubelet[2995]: I0128 01:24:39.966863 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwzkp\" (UniqueName: \"kubernetes.io/projected/63a937f8-f218-45d1-87c6-b75ad5fcad55-kube-api-access-bwzkp\") pod \"calico-apiserver-59889c77b-9msjb\" (UID: \"63a937f8-f218-45d1-87c6-b75ad5fcad55\") " pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" Jan 28 01:24:39.967794 kubelet[2995]: I0128 01:24:39.967770 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24bl6\" (UniqueName: \"kubernetes.io/projected/77b9f804-ee0a-4a45-895f-6d2585222c51-kube-api-access-24bl6\") pod \"whisker-6f6f76b88b-kbkfp\" (UID: \"77b9f804-ee0a-4a45-895f-6d2585222c51\") " pod="calico-system/whisker-6f6f76b88b-kbkfp" Jan 28 01:24:39.970943 kubelet[2995]: I0128 01:24:39.970779 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77b9f804-ee0a-4a45-895f-6d2585222c51-whisker-ca-bundle\") pod \"whisker-6f6f76b88b-kbkfp\" (UID: \"77b9f804-ee0a-4a45-895f-6d2585222c51\") " pod="calico-system/whisker-6f6f76b88b-kbkfp" Jan 28 01:24:39.978534 kubelet[2995]: I0128 01:24:39.978272 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63a937f8-f218-45d1-87c6-b75ad5fcad55-calico-apiserver-certs\") pod \"calico-apiserver-59889c77b-9msjb\" (UID: \"63a937f8-f218-45d1-87c6-b75ad5fcad55\") " pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" Jan 28 01:24:39.978977 kubelet[2995]: I0128 01:24:39.978856 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77b9f804-ee0a-4a45-895f-6d2585222c51-whisker-backend-key-pair\") pod 
\"whisker-6f6f76b88b-kbkfp\" (UID: \"77b9f804-ee0a-4a45-895f-6d2585222c51\") " pod="calico-system/whisker-6f6f76b88b-kbkfp" Jan 28 01:24:39.992547 systemd[1]: Created slice kubepods-besteffort-pod77b9f804_ee0a_4a45_895f_6d2585222c51.slice - libcontainer container kubepods-besteffort-pod77b9f804_ee0a_4a45_895f_6d2585222c51.slice. Jan 28 01:24:40.085555 kubelet[2995]: I0128 01:24:40.085318 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg9bv\" (UniqueName: \"kubernetes.io/projected/acadd2db-d3cd-417f-93e7-6a9e249c8d3d-kube-api-access-rg9bv\") pod \"coredns-674b8bbfcf-fh6hq\" (UID: \"acadd2db-d3cd-417f-93e7-6a9e249c8d3d\") " pod="kube-system/coredns-674b8bbfcf-fh6hq" Jan 28 01:24:40.088877 kubelet[2995]: I0128 01:24:40.088628 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptffd\" (UniqueName: \"kubernetes.io/projected/8c19397c-299f-4305-bb7b-810de8e940fe-kube-api-access-ptffd\") pod \"goldmane-666569f655-g77nq\" (UID: \"8c19397c-299f-4305-bb7b-810de8e940fe\") " pod="calico-system/goldmane-666569f655-g77nq" Jan 28 01:24:40.090690 kubelet[2995]: I0128 01:24:40.090551 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acadd2db-d3cd-417f-93e7-6a9e249c8d3d-config-volume\") pod \"coredns-674b8bbfcf-fh6hq\" (UID: \"acadd2db-d3cd-417f-93e7-6a9e249c8d3d\") " pod="kube-system/coredns-674b8bbfcf-fh6hq" Jan 28 01:24:40.097520 kubelet[2995]: I0128 01:24:40.091105 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8c19397c-299f-4305-bb7b-810de8e940fe-goldmane-key-pair\") pod \"goldmane-666569f655-g77nq\" (UID: \"8c19397c-299f-4305-bb7b-810de8e940fe\") " pod="calico-system/goldmane-666569f655-g77nq" Jan 28 01:24:40.100224 
kubelet[2995]: I0128 01:24:40.097971 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9-config-volume\") pod \"coredns-674b8bbfcf-qs5tp\" (UID: \"28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9\") " pod="kube-system/coredns-674b8bbfcf-qs5tp" Jan 28 01:24:40.100423 kubelet[2995]: I0128 01:24:40.100396 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c2d5c47-2f4c-4dc2-af4d-d250680defb0-tigera-ca-bundle\") pod \"calico-kube-controllers-6d85994946-hw4kg\" (UID: \"2c2d5c47-2f4c-4dc2-af4d-d250680defb0\") " pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" Jan 28 01:24:40.100554 kubelet[2995]: I0128 01:24:40.100528 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmt8\" (UniqueName: \"kubernetes.io/projected/2c2d5c47-2f4c-4dc2-af4d-d250680defb0-kube-api-access-lfmt8\") pod \"calico-kube-controllers-6d85994946-hw4kg\" (UID: \"2c2d5c47-2f4c-4dc2-af4d-d250680defb0\") " pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" Jan 28 01:24:40.100783 kubelet[2995]: I0128 01:24:40.100762 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqrw\" (UniqueName: \"kubernetes.io/projected/28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9-kube-api-access-2jqrw\") pod \"coredns-674b8bbfcf-qs5tp\" (UID: \"28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9\") " pod="kube-system/coredns-674b8bbfcf-qs5tp" Jan 28 01:24:40.101551 kubelet[2995]: I0128 01:24:40.100887 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c19397c-299f-4305-bb7b-810de8e940fe-config\") pod \"goldmane-666569f655-g77nq\" (UID: 
\"8c19397c-299f-4305-bb7b-810de8e940fe\") " pod="calico-system/goldmane-666569f655-g77nq" Jan 28 01:24:40.101825 kubelet[2995]: I0128 01:24:40.101657 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c19397c-299f-4305-bb7b-810de8e940fe-goldmane-ca-bundle\") pod \"goldmane-666569f655-g77nq\" (UID: \"8c19397c-299f-4305-bb7b-810de8e940fe\") " pod="calico-system/goldmane-666569f655-g77nq" Jan 28 01:24:40.121678 systemd[1]: Created slice kubepods-besteffort-pod63a937f8_f218_45d1_87c6_b75ad5fcad55.slice - libcontainer container kubepods-besteffort-pod63a937f8_f218_45d1_87c6_b75ad5fcad55.slice. Jan 28 01:24:40.257562 kubelet[2995]: I0128 01:24:40.256749 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b1cfce8a-501a-4088-a990-12172f5320b3-calico-apiserver-certs\") pod \"calico-apiserver-59889c77b-c5nrl\" (UID: \"b1cfce8a-501a-4088-a990-12172f5320b3\") " pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" Jan 28 01:24:40.280184 kubelet[2995]: I0128 01:24:40.260733 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prlsv\" (UniqueName: \"kubernetes.io/projected/b1cfce8a-501a-4088-a990-12172f5320b3-kube-api-access-prlsv\") pod \"calico-apiserver-59889c77b-c5nrl\" (UID: \"b1cfce8a-501a-4088-a990-12172f5320b3\") " pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" Jan 28 01:24:40.576769 systemd[1]: Created slice kubepods-besteffort-pod2c2d5c47_2f4c_4dc2_af4d_d250680defb0.slice - libcontainer container kubepods-besteffort-pod2c2d5c47_2f4c_4dc2_af4d_d250680defb0.slice. Jan 28 01:24:40.600565 systemd[1]: Created slice kubepods-besteffort-pod8c19397c_299f_4305_bb7b_810de8e940fe.slice - libcontainer container kubepods-besteffort-pod8c19397c_299f_4305_bb7b_810de8e940fe.slice. 
Jan 28 01:24:40.606388 containerd[1624]: time="2026-01-28T01:24:40.606241099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:40.650592 systemd[1]: Created slice kubepods-burstable-podacadd2db_d3cd_417f_93e7_6a9e249c8d3d.slice - libcontainer container kubepods-burstable-podacadd2db_d3cd_417f_93e7_6a9e249c8d3d.slice. Jan 28 01:24:40.658517 containerd[1624]: time="2026-01-28T01:24:40.658392272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:40.667952 kubelet[2995]: E0128 01:24:40.667917 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:40.669092 containerd[1624]: time="2026-01-28T01:24:40.668794115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,}" Jan 28 01:24:40.675412 containerd[1624]: time="2026-01-28T01:24:40.675378172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6f76b88b-kbkfp,Uid:77b9f804-ee0a-4a45-895f-6d2585222c51,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:40.677937 systemd[1]: Created slice kubepods-besteffort-pod845c6024_31b8_4f74_be49_c76c18f222f2.slice - libcontainer container kubepods-besteffort-pod845c6024_31b8_4f74_be49_c76c18f222f2.slice. 
Jan 28 01:24:40.709424 containerd[1624]: time="2026-01-28T01:24:40.708965363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:40.738976 containerd[1624]: time="2026-01-28T01:24:40.727963252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:24:40.998183 kubelet[2995]: E0128 01:24:40.996357 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:41.006908 systemd[1]: Created slice kubepods-burstable-pod28ecd93a_a70c_43e2_8c1d_5a8ef124c4d9.slice - libcontainer container kubepods-burstable-pod28ecd93a_a70c_43e2_8c1d_5a8ef124c4d9.slice. Jan 28 01:24:41.062113 containerd[1624]: time="2026-01-28T01:24:41.061592725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:24:41.077407 kubelet[2995]: E0128 01:24:41.075960 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:41.131738 systemd[1]: Created slice kubepods-besteffort-podb1cfce8a_501a_4088_a990_12172f5320b3.slice - libcontainer container kubepods-besteffort-podb1cfce8a_501a_4088_a990_12172f5320b3.slice. 
Jan 28 01:24:41.154913 containerd[1624]: time="2026-01-28T01:24:41.150774127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,}" Jan 28 01:24:41.275704 containerd[1624]: time="2026-01-28T01:24:41.274119882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:24:42.764637 containerd[1624]: time="2026-01-28T01:24:42.508744702Z" level=error msg="Failed to destroy network for sandbox \"efda96bddc832d72bb97989dcea3823444064818fc751a486460e3eb38f3aa10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:42.784305 containerd[1624]: time="2026-01-28T01:24:42.625613333Z" level=error msg="Failed to destroy network for sandbox \"4f228539aa525e577b48c1abcc3203838b9e98a90b9d6f0588f799159e031a5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:42.770784 systemd[1]: run-netns-cni\x2d3aa13883\x2dd41e\x2d2a27\x2d6653\x2d4277bdb066b2.mount: Deactivated successfully. Jan 28 01:24:42.803411 systemd[1]: run-netns-cni\x2d74546eb3\x2d0b05\x2d4a74\x2d9e52\x2d71e01fe7b46f.mount: Deactivated successfully. 
Jan 28 01:24:43.013258 containerd[1624]: time="2026-01-28T01:24:43.003472754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f228539aa525e577b48c1abcc3203838b9e98a90b9d6f0588f799159e031a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.028241 containerd[1624]: time="2026-01-28T01:24:43.027680821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6f76b88b-kbkfp,Uid:77b9f804-ee0a-4a45-895f-6d2585222c51,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"efda96bddc832d72bb97989dcea3823444064818fc751a486460e3eb38f3aa10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.043516 kubelet[2995]: E0128 01:24:43.032428 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efda96bddc832d72bb97989dcea3823444064818fc751a486460e3eb38f3aa10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.043516 kubelet[2995]: E0128 01:24:43.032521 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efda96bddc832d72bb97989dcea3823444064818fc751a486460e3eb38f3aa10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6f76b88b-kbkfp" Jan 28 01:24:43.043516 kubelet[2995]: E0128 01:24:43.032608 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efda96bddc832d72bb97989dcea3823444064818fc751a486460e3eb38f3aa10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6f76b88b-kbkfp" Jan 28 01:24:43.044697 kubelet[2995]: E0128 01:24:43.032697 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f6f76b88b-kbkfp_calico-system(77b9f804-ee0a-4a45-895f-6d2585222c51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f6f76b88b-kbkfp_calico-system(77b9f804-ee0a-4a45-895f-6d2585222c51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efda96bddc832d72bb97989dcea3823444064818fc751a486460e3eb38f3aa10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f6f76b88b-kbkfp" podUID="77b9f804-ee0a-4a45-895f-6d2585222c51" Jan 28 01:24:43.044697 kubelet[2995]: E0128 01:24:43.033731 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f228539aa525e577b48c1abcc3203838b9e98a90b9d6f0588f799159e031a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.052948 kubelet[2995]: E0128 01:24:43.033898 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"4f228539aa525e577b48c1abcc3203838b9e98a90b9d6f0588f799159e031a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" Jan 28 01:24:43.052948 kubelet[2995]: E0128 01:24:43.049923 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f228539aa525e577b48c1abcc3203838b9e98a90b9d6f0588f799159e031a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" Jan 28 01:24:43.052948 kubelet[2995]: E0128 01:24:43.049983 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f228539aa525e577b48c1abcc3203838b9e98a90b9d6f0588f799159e031a5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:24:43.262540 containerd[1624]: time="2026-01-28T01:24:43.262401232Z" level=error msg="Failed to destroy network for sandbox \"ce570fe77d0d2dfff6ad5f5da36dd7f780be2e6abfd3f2cf976550a074c1537c\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.278414 systemd[1]: run-netns-cni\x2da7c46988\x2d92ac\x2db01e\x2d00a9\x2d3cdd46e48d17.mount: Deactivated successfully. Jan 28 01:24:43.322369 containerd[1624]: time="2026-01-28T01:24:43.294236847Z" level=error msg="Failed to destroy network for sandbox \"047d3f332d19c5dfcbfe21680f6e642bfd538be3d1c60a52ade3fa51515fbda6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.305672 systemd[1]: run-netns-cni\x2d7659b46b\x2d36ac\x2d1432\x2d4387\x2d22b205332501.mount: Deactivated successfully. Jan 28 01:24:43.376790 containerd[1624]: time="2026-01-28T01:24:43.376724363Z" level=error msg="Failed to destroy network for sandbox \"a995234091755ef23625c3f06bc66aeec2684cc48d759e486ad350e6bfbe452f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.393345 systemd[1]: run-netns-cni\x2da501a807\x2d2f8b\x2dbf9c\x2da875\x2d5cb1ce078e8d.mount: Deactivated successfully. 
Jan 28 01:24:43.451648 containerd[1624]: time="2026-01-28T01:24:43.451579738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce570fe77d0d2dfff6ad5f5da36dd7f780be2e6abfd3f2cf976550a074c1537c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.457565 kubelet[2995]: E0128 01:24:43.457203 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce570fe77d0d2dfff6ad5f5da36dd7f780be2e6abfd3f2cf976550a074c1537c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.457565 kubelet[2995]: E0128 01:24:43.457332 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce570fe77d0d2dfff6ad5f5da36dd7f780be2e6abfd3f2cf976550a074c1537c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fh6hq" Jan 28 01:24:43.457565 kubelet[2995]: E0128 01:24:43.457366 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce570fe77d0d2dfff6ad5f5da36dd7f780be2e6abfd3f2cf976550a074c1537c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-fh6hq" Jan 28 01:24:43.457943 kubelet[2995]: E0128 01:24:43.457449 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fh6hq_kube-system(acadd2db-d3cd-417f-93e7-6a9e249c8d3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fh6hq_kube-system(acadd2db-d3cd-417f-93e7-6a9e249c8d3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce570fe77d0d2dfff6ad5f5da36dd7f780be2e6abfd3f2cf976550a074c1537c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fh6hq" podUID="acadd2db-d3cd-417f-93e7-6a9e249c8d3d" Jan 28 01:24:43.458407 containerd[1624]: time="2026-01-28T01:24:43.456974451Z" level=error msg="Failed to destroy network for sandbox \"35493423da316c71620d63bf1836f62369e27c4ea0c2e7e00cee289ff9288367\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.465961 containerd[1624]: time="2026-01-28T01:24:43.465825966Z" level=error msg="Failed to destroy network for sandbox \"fce1294b5693bbdd790b2de49db6069215966e1bed656fb23e98b93145bd7771\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.489197 containerd[1624]: time="2026-01-28T01:24:43.488959086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"047d3f332d19c5dfcbfe21680f6e642bfd538be3d1c60a52ade3fa51515fbda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.498236 kubelet[2995]: E0128 01:24:43.497414 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047d3f332d19c5dfcbfe21680f6e642bfd538be3d1c60a52ade3fa51515fbda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.498979 kubelet[2995]: E0128 01:24:43.498476 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047d3f332d19c5dfcbfe21680f6e642bfd538be3d1c60a52ade3fa51515fbda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qs5tp" Jan 28 01:24:43.498979 kubelet[2995]: E0128 01:24:43.498521 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047d3f332d19c5dfcbfe21680f6e642bfd538be3d1c60a52ade3fa51515fbda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qs5tp" Jan 28 01:24:43.498979 kubelet[2995]: E0128 01:24:43.498605 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"047d3f332d19c5dfcbfe21680f6e642bfd538be3d1c60a52ade3fa51515fbda6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qs5tp" podUID="28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9" Jan 28 01:24:43.527360 containerd[1624]: time="2026-01-28T01:24:43.524990978Z" level=error msg="Failed to destroy network for sandbox \"2f54a0755114a505cf6f5dcc97847fcc3a1a955490155f5adb237ac688660ece\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.529607 containerd[1624]: time="2026-01-28T01:24:43.526428595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a995234091755ef23625c3f06bc66aeec2684cc48d759e486ad350e6bfbe452f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.644957 containerd[1624]: time="2026-01-28T01:24:43.644421451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35493423da316c71620d63bf1836f62369e27c4ea0c2e7e00cee289ff9288367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.649085 containerd[1624]: time="2026-01-28T01:24:43.648385190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fce1294b5693bbdd790b2de49db6069215966e1bed656fb23e98b93145bd7771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.650767 kubelet[2995]: E0128 01:24:43.650507 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fce1294b5693bbdd790b2de49db6069215966e1bed656fb23e98b93145bd7771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.654509 kubelet[2995]: E0128 01:24:43.650952 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fce1294b5693bbdd790b2de49db6069215966e1bed656fb23e98b93145bd7771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" Jan 28 01:24:43.654802 kubelet[2995]: E0128 01:24:43.652948 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35493423da316c71620d63bf1836f62369e27c4ea0c2e7e00cee289ff9288367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 28 01:24:43.654867 kubelet[2995]: E0128 01:24:43.654802 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35493423da316c71620d63bf1836f62369e27c4ea0c2e7e00cee289ff9288367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" Jan 28 01:24:43.654867 kubelet[2995]: E0128 01:24:43.654835 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35493423da316c71620d63bf1836f62369e27c4ea0c2e7e00cee289ff9288367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" Jan 28 01:24:43.657571 kubelet[2995]: E0128 01:24:43.650491 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a995234091755ef23625c3f06bc66aeec2684cc48d759e486ad350e6bfbe452f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.658423 kubelet[2995]: E0128 01:24:43.657669 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a995234091755ef23625c3f06bc66aeec2684cc48d759e486ad350e6bfbe452f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g77nq" Jan 28 
01:24:43.658423 kubelet[2995]: E0128 01:24:43.657702 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a995234091755ef23625c3f06bc66aeec2684cc48d759e486ad350e6bfbe452f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g77nq" Jan 28 01:24:43.658423 kubelet[2995]: E0128 01:24:43.657751 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a995234091755ef23625c3f06bc66aeec2684cc48d759e486ad350e6bfbe452f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:24:43.658670 kubelet[2995]: E0128 01:24:43.657112 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35493423da316c71620d63bf1836f62369e27c4ea0c2e7e00cee289ff9288367\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:24:43.658670 kubelet[2995]: E0128 01:24:43.658519 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fce1294b5693bbdd790b2de49db6069215966e1bed656fb23e98b93145bd7771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" Jan 28 01:24:43.658885 kubelet[2995]: E0128 01:24:43.658591 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fce1294b5693bbdd790b2de49db6069215966e1bed656fb23e98b93145bd7771\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:24:43.756750 systemd[1]: run-netns-cni\x2de14a18b0\x2d19af\x2d422e\x2d0f87\x2d26004a28ea55.mount: Deactivated successfully. Jan 28 01:24:43.757991 systemd[1]: run-netns-cni\x2d334bac8e\x2d6905\x2de69f\x2dde21\x2d2dfcd289df6e.mount: Deactivated successfully. Jan 28 01:24:43.758298 systemd[1]: run-netns-cni\x2d03fcae3d\x2d4689\x2d4eba\x2da223\x2d845e0a84e6f6.mount: Deactivated successfully. 
Jan 28 01:24:43.770268 containerd[1624]: time="2026-01-28T01:24:43.770178266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f54a0755114a505cf6f5dcc97847fcc3a1a955490155f5adb237ac688660ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.870987 kubelet[2995]: E0128 01:24:43.816243 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f54a0755114a505cf6f5dcc97847fcc3a1a955490155f5adb237ac688660ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:43.870987 kubelet[2995]: E0128 01:24:43.817407 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f54a0755114a505cf6f5dcc97847fcc3a1a955490155f5adb237ac688660ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kgc2v" Jan 28 01:24:43.870987 kubelet[2995]: E0128 01:24:43.817512 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f54a0755114a505cf6f5dcc97847fcc3a1a955490155f5adb237ac688660ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kgc2v" 
Jan 28 01:24:43.871599 kubelet[2995]: E0128 01:24:43.821253 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f54a0755114a505cf6f5dcc97847fcc3a1a955490155f5adb237ac688660ece\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:24:55.123529 containerd[1624]: time="2026-01-28T01:24:55.120778039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:56.066561 kubelet[2995]: E0128 01:24:56.064369 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:56.070919 containerd[1624]: time="2026-01-28T01:24:56.068875337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,}" Jan 28 01:24:56.074514 containerd[1624]: time="2026-01-28T01:24:56.074478538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6f76b88b-kbkfp,Uid:77b9f804-ee0a-4a45-895f-6d2585222c51,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:56.623250 containerd[1624]: time="2026-01-28T01:24:56.623183939Z" level=error msg="Failed to destroy network for sandbox \"f4948a01aa2e33fd5cc07d5b8dd5392963b6dc06e6976520bd8de08f3b86698e\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:56.669980 systemd[1]: run-netns-cni\x2d0ff81e1d\x2d9d5b\x2d0a1f\x2dab04\x2d3241f9aca9f3.mount: Deactivated successfully.
Jan 28 01:24:56.683539 containerd[1624]: time="2026-01-28T01:24:56.680700844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4948a01aa2e33fd5cc07d5b8dd5392963b6dc06e6976520bd8de08f3b86698e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:56.686511 kubelet[2995]: E0128 01:24:56.685825    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4948a01aa2e33fd5cc07d5b8dd5392963b6dc06e6976520bd8de08f3b86698e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:56.696801 kubelet[2995]: E0128 01:24:56.685980    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4948a01aa2e33fd5cc07d5b8dd5392963b6dc06e6976520bd8de08f3b86698e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg"
Jan 28 01:24:56.696801 kubelet[2995]: E0128 01:24:56.689709    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4948a01aa2e33fd5cc07d5b8dd5392963b6dc06e6976520bd8de08f3b86698e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg"
Jan 28 01:24:56.696801 kubelet[2995]: E0128 01:24:56.693910    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4948a01aa2e33fd5cc07d5b8dd5392963b6dc06e6976520bd8de08f3b86698e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0"
Jan 28 01:24:57.077581 containerd[1624]: time="2026-01-28T01:24:57.075694398Z" level=error msg="Failed to destroy network for sandbox \"7c4c06bddd559fd1b73af04f0999716135c8bfeed50eab8db05b5a00df8c449a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.103617 systemd[1]: run-netns-cni\x2d5161a911\x2d91eb\x2de45c\x2d64d2\x2d2ed1cdde4718.mount: Deactivated successfully.
Jan 28 01:24:57.109670 kubelet[2995]: E0128 01:24:57.109641    2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:24:57.121436 containerd[1624]: time="2026-01-28T01:24:57.110516921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6f76b88b-kbkfp,Uid:77b9f804-ee0a-4a45-895f-6d2585222c51,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4c06bddd559fd1b73af04f0999716135c8bfeed50eab8db05b5a00df8c449a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.125298 containerd[1624]: time="2026-01-28T01:24:57.122445610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,}"
Jan 28 01:24:57.125415 kubelet[2995]: E0128 01:24:57.124380    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4c06bddd559fd1b73af04f0999716135c8bfeed50eab8db05b5a00df8c449a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.125479 kubelet[2995]: E0128 01:24:57.125246    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4c06bddd559fd1b73af04f0999716135c8bfeed50eab8db05b5a00df8c449a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6f76b88b-kbkfp"
Jan 28 01:24:57.125928 kubelet[2995]: E0128 01:24:57.125747    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4c06bddd559fd1b73af04f0999716135c8bfeed50eab8db05b5a00df8c449a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6f76b88b-kbkfp"
Jan 28 01:24:57.126799 containerd[1624]: time="2026-01-28T01:24:57.125756214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,}"
Jan 28 01:24:57.132624 kubelet[2995]: E0128 01:24:57.132398    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f6f76b88b-kbkfp_calico-system(77b9f804-ee0a-4a45-895f-6d2585222c51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f6f76b88b-kbkfp_calico-system(77b9f804-ee0a-4a45-895f-6d2585222c51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c4c06bddd559fd1b73af04f0999716135c8bfeed50eab8db05b5a00df8c449a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f6f76b88b-kbkfp" podUID="77b9f804-ee0a-4a45-895f-6d2585222c51"
Jan 28 01:24:57.147956 containerd[1624]: time="2026-01-28T01:24:57.147745874Z" level=error msg="Failed to destroy network for sandbox \"237194ca6e20b68882c97594875d22636b0729a8b3bc8d4e23840146ef5c2656\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.188424 systemd[1]: run-netns-cni\x2de2e79d54\x2db09d\x2de2d5\x2dc143\x2d67a79dc309f4.mount: Deactivated successfully.
Jan 28 01:24:57.265612 containerd[1624]: time="2026-01-28T01:24:57.265309966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"237194ca6e20b68882c97594875d22636b0729a8b3bc8d4e23840146ef5c2656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.279741 kubelet[2995]: E0128 01:24:57.279675    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237194ca6e20b68882c97594875d22636b0729a8b3bc8d4e23840146ef5c2656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.280200 kubelet[2995]: E0128 01:24:57.279994    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237194ca6e20b68882c97594875d22636b0729a8b3bc8d4e23840146ef5c2656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qs5tp"
Jan 28 01:24:57.280950 kubelet[2995]: E0128 01:24:57.280496    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237194ca6e20b68882c97594875d22636b0729a8b3bc8d4e23840146ef5c2656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qs5tp"
Jan 28 01:24:57.280950 kubelet[2995]: E0128 01:24:57.280670    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"237194ca6e20b68882c97594875d22636b0729a8b3bc8d4e23840146ef5c2656\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qs5tp" podUID="28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9"
Jan 28 01:24:57.812103 containerd[1624]: time="2026-01-28T01:24:57.807986477Z" level=error msg="Failed to destroy network for sandbox \"85d0bf9c5cd3f97195ed5ce380ca00aa5e60796b2d488316a0f59702cfc39403\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.820397 systemd[1]: run-netns-cni\x2d83c6913a\x2d77d7\x2de4fc\x2dc716\x2dad6469ff5727.mount: Deactivated successfully.
Jan 28 01:24:57.865535 containerd[1624]: time="2026-01-28T01:24:57.847491929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d0bf9c5cd3f97195ed5ce380ca00aa5e60796b2d488316a0f59702cfc39403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.870217 kubelet[2995]: E0128 01:24:57.855090    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d0bf9c5cd3f97195ed5ce380ca00aa5e60796b2d488316a0f59702cfc39403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.870217 kubelet[2995]: E0128 01:24:57.855225    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d0bf9c5cd3f97195ed5ce380ca00aa5e60796b2d488316a0f59702cfc39403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fh6hq"
Jan 28 01:24:57.870217 kubelet[2995]: E0128 01:24:57.855256    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d0bf9c5cd3f97195ed5ce380ca00aa5e60796b2d488316a0f59702cfc39403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fh6hq"
Jan 28 01:24:57.870480 kubelet[2995]: E0128 01:24:57.855322    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fh6hq_kube-system(acadd2db-d3cd-417f-93e7-6a9e249c8d3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fh6hq_kube-system(acadd2db-d3cd-417f-93e7-6a9e249c8d3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85d0bf9c5cd3f97195ed5ce380ca00aa5e60796b2d488316a0f59702cfc39403\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fh6hq" podUID="acadd2db-d3cd-417f-93e7-6a9e249c8d3d"
Jan 28 01:24:57.945168 containerd[1624]: time="2026-01-28T01:24:57.938340752Z" level=error msg="Failed to destroy network for sandbox \"bc7ce5c9dfeb7120efc8cd0dacb886c5080d6e3ccdcf41e242d5d277df9fbe37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.947750 systemd[1]: run-netns-cni\x2d827bed62\x2d5b25\x2da3e9\x2d27dd\x2d7ff642466c2f.mount: Deactivated successfully.
Jan 28 01:24:57.963829 containerd[1624]: time="2026-01-28T01:24:57.961306080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc7ce5c9dfeb7120efc8cd0dacb886c5080d6e3ccdcf41e242d5d277df9fbe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.965391 kubelet[2995]: E0128 01:24:57.964797    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc7ce5c9dfeb7120efc8cd0dacb886c5080d6e3ccdcf41e242d5d277df9fbe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:57.965391 kubelet[2995]: E0128 01:24:57.964882    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc7ce5c9dfeb7120efc8cd0dacb886c5080d6e3ccdcf41e242d5d277df9fbe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g77nq"
Jan 28 01:24:57.965391 kubelet[2995]: E0128 01:24:57.964917    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc7ce5c9dfeb7120efc8cd0dacb886c5080d6e3ccdcf41e242d5d277df9fbe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g77nq"
Jan 28 01:24:57.974308 kubelet[2995]: E0128 01:24:57.964982    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc7ce5c9dfeb7120efc8cd0dacb886c5080d6e3ccdcf41e242d5d277df9fbe37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe"
Jan 28 01:24:58.077378 containerd[1624]: time="2026-01-28T01:24:58.073242872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 01:24:58.085222 containerd[1624]: time="2026-01-28T01:24:58.079853560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,}"
Jan 28 01:24:58.085222 containerd[1624]: time="2026-01-28T01:24:58.080218760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 01:24:59.253782 containerd[1624]: time="2026-01-28T01:24:59.253723860Z" level=error msg="Failed to destroy network for sandbox \"5437da9638eb4f76e31b636ca6790b13489d6a4f1fb49b563efd95eadb54cf7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:59.263924 systemd[1]: run-netns-cni\x2db886ff36\x2dde49\x2d0b9c\x2d1b30\x2d387b9a10ff6b.mount: Deactivated successfully.
Jan 28 01:24:59.305101 containerd[1624]: time="2026-01-28T01:24:59.301224000Z" level=error msg="Failed to destroy network for sandbox \"1c5a2bfd6943592602f33895f6d6509b99ec979b07c7e2db15857b86931fae9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:59.308375 containerd[1624]: time="2026-01-28T01:24:59.308313655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5437da9638eb4f76e31b636ca6790b13489d6a4f1fb49b563efd95eadb54cf7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:59.314851 systemd[1]: run-netns-cni\x2d94f51885\x2dbe0b\x2d345b\x2d621a\x2d51d618d3e835.mount: Deactivated successfully.
Jan 28 01:24:59.336486 containerd[1624]: time="2026-01-28T01:24:59.332984433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5a2bfd6943592602f33895f6d6509b99ec979b07c7e2db15857b86931fae9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:59.338428 kubelet[2995]: E0128 01:24:59.335897    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5437da9638eb4f76e31b636ca6790b13489d6a4f1fb49b563efd95eadb54cf7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:59.354356 kubelet[2995]: E0128 01:24:59.335983    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5437da9638eb4f76e31b636ca6790b13489d6a4f1fb49b563efd95eadb54cf7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl"
Jan 28 01:24:59.354356 kubelet[2995]: E0128 01:24:59.352545    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5a2bfd6943592602f33895f6d6509b99ec979b07c7e2db15857b86931fae9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:59.354356 kubelet[2995]: E0128 01:24:59.353236    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5a2bfd6943592602f33895f6d6509b99ec979b07c7e2db15857b86931fae9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kgc2v"
Jan 28 01:24:59.354356 kubelet[2995]: E0128 01:24:59.353268    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5a2bfd6943592602f33895f6d6509b99ec979b07c7e2db15857b86931fae9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kgc2v"
Jan 28 01:24:59.354626 kubelet[2995]: E0128 01:24:59.353335    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c5a2bfd6943592602f33895f6d6509b99ec979b07c7e2db15857b86931fae9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2"
Jan 28 01:24:59.354626 kubelet[2995]: E0128 01:24:59.353418    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5437da9638eb4f76e31b636ca6790b13489d6a4f1fb49b563efd95eadb54cf7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl"
Jan 28 01:24:59.378832 kubelet[2995]: E0128 01:24:59.373670    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5437da9638eb4f76e31b636ca6790b13489d6a4f1fb49b563efd95eadb54cf7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3"
Jan 28 01:24:59.574502 containerd[1624]: time="2026-01-28T01:24:59.574362715Z" level=error msg="Failed to destroy network for sandbox \"e58add0854d2307e0bbc06e1c97cd6ed49b890d9175314e074e731faa3d8555f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:59.619334 systemd[1]: run-netns-cni\x2d5636663b\x2d7e56\x2d9399\x2df925\x2d15671a286804.mount: Deactivated successfully.
Jan 28 01:24:59.626166 containerd[1624]: time="2026-01-28T01:24:59.625720329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58add0854d2307e0bbc06e1c97cd6ed49b890d9175314e074e731faa3d8555f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:59.632099 kubelet[2995]: E0128 01:24:59.631845    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58add0854d2307e0bbc06e1c97cd6ed49b890d9175314e074e731faa3d8555f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:24:59.632099 kubelet[2995]: E0128 01:24:59.631926    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58add0854d2307e0bbc06e1c97cd6ed49b890d9175314e074e731faa3d8555f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb"
Jan 28 01:24:59.632099 kubelet[2995]: E0128 01:24:59.631956    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58add0854d2307e0bbc06e1c97cd6ed49b890d9175314e074e731faa3d8555f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb"
Jan 28 01:24:59.633454 kubelet[2995]: E0128 01:24:59.632934    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e58add0854d2307e0bbc06e1c97cd6ed49b890d9175314e074e731faa3d8555f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55"
Jan 28 01:25:09.091407 kubelet[2995]: E0128 01:25:09.086460    2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:25:09.115403 containerd[1624]: time="2026-01-28T01:25:09.115361757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,}"
Jan 28 01:25:09.882898 containerd[1624]: time="2026-01-28T01:25:09.881870924Z" level=error msg="Failed to destroy network for sandbox \"c1b47ebfdadfd3c83d0f0e48fa3db80a4ffe91e66bc232d49223b3bded56e7db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:09.892285 systemd[1]: run-netns-cni\x2deb3f00fe\x2d327e\x2de41b\x2d401e\x2d95d1bcdfe12b.mount: Deactivated successfully.
Jan 28 01:25:09.958267 containerd[1624]: time="2026-01-28T01:25:09.957773833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1b47ebfdadfd3c83d0f0e48fa3db80a4ffe91e66bc232d49223b3bded56e7db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:09.959920 kubelet[2995]: E0128 01:25:09.959480    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1b47ebfdadfd3c83d0f0e48fa3db80a4ffe91e66bc232d49223b3bded56e7db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:09.959920 kubelet[2995]: E0128 01:25:09.959751    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1b47ebfdadfd3c83d0f0e48fa3db80a4ffe91e66bc232d49223b3bded56e7db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qs5tp"
Jan 28 01:25:09.959920 kubelet[2995]: E0128 01:25:09.959789    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1b47ebfdadfd3c83d0f0e48fa3db80a4ffe91e66bc232d49223b3bded56e7db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qs5tp"
Jan 28 01:25:09.960352 kubelet[2995]: E0128 01:25:09.959919    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1b47ebfdadfd3c83d0f0e48fa3db80a4ffe91e66bc232d49223b3bded56e7db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qs5tp" podUID="28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9"
Jan 28 01:25:11.264953 containerd[1624]: time="2026-01-28T01:25:11.264222572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 01:25:11.264953 containerd[1624]: time="2026-01-28T01:25:11.264489585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,}"
Jan 28 01:25:12.065992 containerd[1624]: time="2026-01-28T01:25:12.063430043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6f76b88b-kbkfp,Uid:77b9f804-ee0a-4a45-895f-6d2585222c51,Namespace:calico-system,Attempt:0,}"
Jan 28 01:25:12.265296 containerd[1624]: time="2026-01-28T01:25:12.264314071Z" level=error msg="Failed to destroy network for sandbox \"18a9e54ee5b7808a9b0043c810b26ea7deb620c6e1f6c93c5716f2308c3f9a8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:12.278825 systemd[1]: run-netns-cni\x2dcf29793d\x2dfc14\x2d9998\x2d7219\x2dbf57f55cf140.mount: Deactivated successfully.
Jan 28 01:25:12.285804 containerd[1624]: time="2026-01-28T01:25:12.285759353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a9e54ee5b7808a9b0043c810b26ea7deb620c6e1f6c93c5716f2308c3f9a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:12.288440 kubelet[2995]: E0128 01:25:12.287655    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a9e54ee5b7808a9b0043c810b26ea7deb620c6e1f6c93c5716f2308c3f9a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:12.288440 kubelet[2995]: E0128 01:25:12.287724    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a9e54ee5b7808a9b0043c810b26ea7deb620c6e1f6c93c5716f2308c3f9a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg"
Jan 28 01:25:12.288440 kubelet[2995]: E0128 01:25:12.287753    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a9e54ee5b7808a9b0043c810b26ea7deb620c6e1f6c93c5716f2308c3f9a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg"
Jan 28 01:25:12.289350 kubelet[2995]: E0128 01:25:12.287812    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18a9e54ee5b7808a9b0043c810b26ea7deb620c6e1f6c93c5716f2308c3f9a8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0"
Jan 28 01:25:12.367486 containerd[1624]: time="2026-01-28T01:25:12.366591075Z" level=error msg="Failed to destroy network for sandbox \"c7baa4e1eda5db7625d3bc00131aa07c5e313adcf170449ed2bf66a47d963f23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:12.421902 systemd[1]: run-netns-cni\x2d924b6aca\x2d30b8\x2d0d88\x2d65ad\x2d6467cb8ae611.mount: Deactivated successfully.
Jan 28 01:25:12.423866 containerd[1624]: time="2026-01-28T01:25:12.422367905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7baa4e1eda5db7625d3bc00131aa07c5e313adcf170449ed2bf66a47d963f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:12.431661 kubelet[2995]: E0128 01:25:12.422833    2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7baa4e1eda5db7625d3bc00131aa07c5e313adcf170449ed2bf66a47d963f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:12.431661 kubelet[2995]: E0128 01:25:12.424423    2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7baa4e1eda5db7625d3bc00131aa07c5e313adcf170449ed2bf66a47d963f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl"
Jan 28 01:25:12.431661 kubelet[2995]: E0128 01:25:12.424463    2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7baa4e1eda5db7625d3bc00131aa07c5e313adcf170449ed2bf66a47d963f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl"
Jan 28 01:25:12.431874 kubelet[2995]: E0128 01:25:12.431515    2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7baa4e1eda5db7625d3bc00131aa07c5e313adcf170449ed2bf66a47d963f23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3"
Jan 28 01:25:12.754568 containerd[1624]: time="2026-01-28T01:25:12.744426768Z" level=error msg="Failed to destroy network for sandbox \"8934fed384c0962aa771052808114fbe6568b414405def4543bc056c76dd96bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:12.761765 containerd[1624]: time="2026-01-28T01:25:12.761433889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6f76b88b-kbkfp,Uid:77b9f804-ee0a-4a45-895f-6d2585222c51,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8934fed384c0962aa771052808114fbe6568b414405def4543bc056c76dd96bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:25:12.761813 systemd[1]: run-netns-cni\x2d73e07dfb\x2dbb4c\x2d4339\x2d942c\x2d6fa212e5f5a6.mount: Deactivated successfully.
Jan 28 01:25:12.764845 kubelet[2995]: E0128 01:25:12.764580 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8934fed384c0962aa771052808114fbe6568b414405def4543bc056c76dd96bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:12.764845 kubelet[2995]: E0128 01:25:12.764720 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8934fed384c0962aa771052808114fbe6568b414405def4543bc056c76dd96bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6f76b88b-kbkfp" Jan 28 01:25:12.764845 kubelet[2995]: E0128 01:25:12.764755 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8934fed384c0962aa771052808114fbe6568b414405def4543bc056c76dd96bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6f76b88b-kbkfp" Jan 28 01:25:12.765301 kubelet[2995]: E0128 01:25:12.764817 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f6f76b88b-kbkfp_calico-system(77b9f804-ee0a-4a45-895f-6d2585222c51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f6f76b88b-kbkfp_calico-system(77b9f804-ee0a-4a45-895f-6d2585222c51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8934fed384c0962aa771052808114fbe6568b414405def4543bc056c76dd96bb\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f6f76b88b-kbkfp" podUID="77b9f804-ee0a-4a45-895f-6d2585222c51" Jan 28 01:25:13.076527 kubelet[2995]: E0128 01:25:13.075289 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:13.079266 containerd[1624]: time="2026-01-28T01:25:13.078440907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:25:13.079696 containerd[1624]: time="2026-01-28T01:25:13.079601580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:13.079884 containerd[1624]: time="2026-01-28T01:25:13.079788959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:14.119278 containerd[1624]: time="2026-01-28T01:25:14.117487516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:14.596376 containerd[1624]: time="2026-01-28T01:25:14.588592978Z" level=error msg="Failed to destroy network for sandbox \"93665cdfb4d38b0171b37ebef4921464827008aec98bd21eb941809811f8504c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:14.599410 systemd[1]: run-netns-cni\x2da7a1be2c\x2d58ab\x2d0063\x2d5d33\x2d243352e3be47.mount: Deactivated successfully. 
Jan 28 01:25:14.669338 containerd[1624]: time="2026-01-28T01:25:14.668238928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93665cdfb4d38b0171b37ebef4921464827008aec98bd21eb941809811f8504c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:14.671770 kubelet[2995]: E0128 01:25:14.670504 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93665cdfb4d38b0171b37ebef4921464827008aec98bd21eb941809811f8504c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:14.671770 kubelet[2995]: E0128 01:25:14.670648 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93665cdfb4d38b0171b37ebef4921464827008aec98bd21eb941809811f8504c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g77nq" Jan 28 01:25:14.671770 kubelet[2995]: E0128 01:25:14.670684 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93665cdfb4d38b0171b37ebef4921464827008aec98bd21eb941809811f8504c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-g77nq" Jan 28 01:25:14.672699 kubelet[2995]: E0128 01:25:14.670762 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93665cdfb4d38b0171b37ebef4921464827008aec98bd21eb941809811f8504c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:25:14.999392 containerd[1624]: time="2026-01-28T01:25:14.987627403Z" level=error msg="Failed to destroy network for sandbox \"0e01e567a816190e08b366561913a69b03ab6ce9703f2749761ffe4d86caa950\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:15.017345 systemd[1]: run-netns-cni\x2d1b848f96\x2d60fa\x2de788\x2de4c2\x2d0683f72f177e.mount: Deactivated successfully. 
Jan 28 01:25:15.072256 containerd[1624]: time="2026-01-28T01:25:15.070528926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01e567a816190e08b366561913a69b03ab6ce9703f2749761ffe4d86caa950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:15.085884 kubelet[2995]: E0128 01:25:15.084878 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01e567a816190e08b366561913a69b03ab6ce9703f2749761ffe4d86caa950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:15.085884 kubelet[2995]: E0128 01:25:15.084947 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01e567a816190e08b366561913a69b03ab6ce9703f2749761ffe4d86caa950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" Jan 28 01:25:15.085884 kubelet[2995]: E0128 01:25:15.084974 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01e567a816190e08b366561913a69b03ab6ce9703f2749761ffe4d86caa950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" Jan 28 01:25:15.090915 kubelet[2995]: E0128 01:25:15.085287 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e01e567a816190e08b366561913a69b03ab6ce9703f2749761ffe4d86caa950\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:25:15.269421 containerd[1624]: time="2026-01-28T01:25:15.269191931Z" level=error msg="Failed to destroy network for sandbox \"a938376a9691325d6dc04e595a1fccec0b46ff135d6d67cecdcfee4bced93b54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:15.382941 systemd[1]: run-netns-cni\x2d5e5986b9\x2da661\x2d531e\x2d7f0e\x2d8d0d8b3dab9d.mount: Deactivated successfully. 
Jan 28 01:25:15.523579 containerd[1624]: time="2026-01-28T01:25:15.518713235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a938376a9691325d6dc04e595a1fccec0b46ff135d6d67cecdcfee4bced93b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:15.533914 kubelet[2995]: E0128 01:25:15.529482 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a938376a9691325d6dc04e595a1fccec0b46ff135d6d67cecdcfee4bced93b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:15.543753 kubelet[2995]: E0128 01:25:15.543707 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a938376a9691325d6dc04e595a1fccec0b46ff135d6d67cecdcfee4bced93b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fh6hq" Jan 28 01:25:15.566559 kubelet[2995]: E0128 01:25:15.543873 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a938376a9691325d6dc04e595a1fccec0b46ff135d6d67cecdcfee4bced93b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-fh6hq" Jan 28 01:25:15.566559 kubelet[2995]: E0128 01:25:15.566343 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fh6hq_kube-system(acadd2db-d3cd-417f-93e7-6a9e249c8d3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fh6hq_kube-system(acadd2db-d3cd-417f-93e7-6a9e249c8d3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a938376a9691325d6dc04e595a1fccec0b46ff135d6d67cecdcfee4bced93b54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fh6hq" podUID="acadd2db-d3cd-417f-93e7-6a9e249c8d3d" Jan 28 01:25:15.971641 containerd[1624]: time="2026-01-28T01:25:15.971575061Z" level=error msg="Failed to destroy network for sandbox \"519a62927c03a5e358706370fd418133321455be0b81e61e162a540113e625d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:15.987790 systemd[1]: run-netns-cni\x2d82fb67c9\x2d1c16\x2d549d\x2d8c1c\x2dd9e2fa7ad5de.mount: Deactivated successfully. 
Jan 28 01:25:16.124271 containerd[1624]: time="2026-01-28T01:25:16.123337485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"519a62927c03a5e358706370fd418133321455be0b81e61e162a540113e625d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:16.125855 kubelet[2995]: E0128 01:25:16.125566 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519a62927c03a5e358706370fd418133321455be0b81e61e162a540113e625d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:16.126628 kubelet[2995]: E0128 01:25:16.125937 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519a62927c03a5e358706370fd418133321455be0b81e61e162a540113e625d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kgc2v" Jan 28 01:25:16.126628 kubelet[2995]: E0128 01:25:16.125974 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519a62927c03a5e358706370fd418133321455be0b81e61e162a540113e625d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kgc2v" 
Jan 28 01:25:16.132319 kubelet[2995]: E0128 01:25:16.132235 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"519a62927c03a5e358706370fd418133321455be0b81e61e162a540113e625d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:25:21.078589 kubelet[2995]: E0128 01:25:21.071943 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:21.082315 containerd[1624]: time="2026-01-28T01:25:21.080560900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:21.461554 containerd[1624]: time="2026-01-28T01:25:21.461204946Z" level=error msg="Failed to destroy network for sandbox \"2889e69d46a73d37bc3ac063b23dddfb8d9711d59601063f1f5a987199c83912\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:21.474783 systemd[1]: run-netns-cni\x2d6100665a\x2d49fe\x2d0203\x2de5fe\x2d56d904c950f5.mount: Deactivated successfully. 
Jan 28 01:25:21.495250 containerd[1624]: time="2026-01-28T01:25:21.493903614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2889e69d46a73d37bc3ac063b23dddfb8d9711d59601063f1f5a987199c83912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:21.498328 kubelet[2995]: E0128 01:25:21.496876 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2889e69d46a73d37bc3ac063b23dddfb8d9711d59601063f1f5a987199c83912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:21.498328 kubelet[2995]: E0128 01:25:21.496958 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2889e69d46a73d37bc3ac063b23dddfb8d9711d59601063f1f5a987199c83912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qs5tp" Jan 28 01:25:21.498328 kubelet[2995]: E0128 01:25:21.496991 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2889e69d46a73d37bc3ac063b23dddfb8d9711d59601063f1f5a987199c83912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-qs5tp" Jan 28 01:25:21.498537 kubelet[2995]: E0128 01:25:21.497285 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2889e69d46a73d37bc3ac063b23dddfb8d9711d59601063f1f5a987199c83912\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qs5tp" podUID="28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9" Jan 28 01:25:25.093782 containerd[1624]: time="2026-01-28T01:25:25.093724281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:25:25.106781 containerd[1624]: time="2026-01-28T01:25:25.094539441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6f76b88b-kbkfp,Uid:77b9f804-ee0a-4a45-895f-6d2585222c51,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:26.107529 containerd[1624]: time="2026-01-28T01:25:26.103884273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:26.459705 containerd[1624]: time="2026-01-28T01:25:26.458733516Z" level=error msg="Failed to destroy network for sandbox \"f7af22404c1fdb78ff346bd047b74d0182b8f28c0a1bba16108c931d6302edc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 
01:25:26.487698 systemd[1]: run-netns-cni\x2d2f4adbc6\x2dae59\x2d0999\x2d4680\x2d27bc2bb9f208.mount: Deactivated successfully. Jan 28 01:25:26.508637 containerd[1624]: time="2026-01-28T01:25:26.508564337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7af22404c1fdb78ff346bd047b74d0182b8f28c0a1bba16108c931d6302edc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:26.510573 kubelet[2995]: E0128 01:25:26.509790 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7af22404c1fdb78ff346bd047b74d0182b8f28c0a1bba16108c931d6302edc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:26.510573 kubelet[2995]: E0128 01:25:26.509874 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7af22404c1fdb78ff346bd047b74d0182b8f28c0a1bba16108c931d6302edc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" Jan 28 01:25:26.510573 kubelet[2995]: E0128 01:25:26.509906 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7af22404c1fdb78ff346bd047b74d0182b8f28c0a1bba16108c931d6302edc3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" Jan 28 01:25:26.518164 kubelet[2995]: E0128 01:25:26.509968 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7af22404c1fdb78ff346bd047b74d0182b8f28c0a1bba16108c931d6302edc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:25:26.624704 containerd[1624]: time="2026-01-28T01:25:26.621607515Z" level=error msg="Failed to destroy network for sandbox \"358326eddf47e6b4af825c6d445b05c2deb63f70ec68554b878492478562aaf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:26.637451 systemd[1]: run-netns-cni\x2d0c515ef9\x2dbd91\x2d33c3\x2d09e2\x2dca0dcf0d4ce4.mount: Deactivated successfully. 
Jan 28 01:25:26.666590 containerd[1624]: time="2026-01-28T01:25:26.663233032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6f76b88b-kbkfp,Uid:77b9f804-ee0a-4a45-895f-6d2585222c51,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"358326eddf47e6b4af825c6d445b05c2deb63f70ec68554b878492478562aaf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:26.669151 kubelet[2995]: E0128 01:25:26.668464 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358326eddf47e6b4af825c6d445b05c2deb63f70ec68554b878492478562aaf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:26.669151 kubelet[2995]: E0128 01:25:26.668550 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358326eddf47e6b4af825c6d445b05c2deb63f70ec68554b878492478562aaf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6f76b88b-kbkfp" Jan 28 01:25:26.669151 kubelet[2995]: E0128 01:25:26.668582 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358326eddf47e6b4af825c6d445b05c2deb63f70ec68554b878492478562aaf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6f6f76b88b-kbkfp" Jan 28 01:25:26.671230 kubelet[2995]: E0128 01:25:26.668650 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f6f76b88b-kbkfp_calico-system(77b9f804-ee0a-4a45-895f-6d2585222c51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f6f76b88b-kbkfp_calico-system(77b9f804-ee0a-4a45-895f-6d2585222c51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"358326eddf47e6b4af825c6d445b05c2deb63f70ec68554b878492478562aaf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f6f76b88b-kbkfp" podUID="77b9f804-ee0a-4a45-895f-6d2585222c51" Jan 28 01:25:27.017175 containerd[1624]: time="2026-01-28T01:25:27.012866555Z" level=error msg="Failed to destroy network for sandbox \"e08cafbe3d87fe9773986186ad96214046cf1477d9558127e1751cddd59d306b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:27.028307 systemd[1]: run-netns-cni\x2d8ee2cb8a\x2d51f4\x2dfd39\x2dc8e9\x2dfc4f5043751a.mount: Deactivated successfully. 
Jan 28 01:25:27.031559 containerd[1624]: time="2026-01-28T01:25:27.031501116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08cafbe3d87fe9773986186ad96214046cf1477d9558127e1751cddd59d306b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:27.048369 kubelet[2995]: E0128 01:25:27.034513 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08cafbe3d87fe9773986186ad96214046cf1477d9558127e1751cddd59d306b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:27.048369 kubelet[2995]: E0128 01:25:27.034599 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08cafbe3d87fe9773986186ad96214046cf1477d9558127e1751cddd59d306b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" Jan 28 01:25:27.048369 kubelet[2995]: E0128 01:25:27.034629 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08cafbe3d87fe9773986186ad96214046cf1477d9558127e1751cddd59d306b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" Jan 28 01:25:27.049338 kubelet[2995]: E0128 01:25:27.034698 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e08cafbe3d87fe9773986186ad96214046cf1477d9558127e1751cddd59d306b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:25:27.065176 kubelet[2995]: E0128 01:25:27.063375 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:27.069661 containerd[1624]: time="2026-01-28T01:25:27.068588086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:25:27.070175 containerd[1624]: time="2026-01-28T01:25:27.070143606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:28.271708 containerd[1624]: time="2026-01-28T01:25:28.271641161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:28.353800 containerd[1624]: time="2026-01-28T01:25:28.353619335Z" level=error msg="Failed to destroy 
network for sandbox \"c62d83d7c088a4a29df8f82e3ca502d79689709326fd062401e78e3d055a04a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:28.369730 systemd[1]: run-netns-cni\x2dd5f45af5\x2dd425\x2d9a65\x2d8ba9\x2d9634aae3a2c7.mount: Deactivated successfully. Jan 28 01:25:28.371817 containerd[1624]: time="2026-01-28T01:25:28.371479733Z" level=error msg="Failed to destroy network for sandbox \"ed79465c2484c26e4c9e6a58de75eeaade8a783d7d20ce94e0c768bd904e1dc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:28.394453 systemd[1]: run-netns-cni\x2dcc12af54\x2d5476\x2d97d1\x2dd3a1\x2d013e1245ecb6.mount: Deactivated successfully. Jan 28 01:25:28.397952 kubelet[2995]: E0128 01:25:28.395638 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62d83d7c088a4a29df8f82e3ca502d79689709326fd062401e78e3d055a04a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:28.397952 kubelet[2995]: E0128 01:25:28.395720 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62d83d7c088a4a29df8f82e3ca502d79689709326fd062401e78e3d055a04a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fh6hq" Jan 28 01:25:28.397952 kubelet[2995]: E0128 01:25:28.395748 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62d83d7c088a4a29df8f82e3ca502d79689709326fd062401e78e3d055a04a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fh6hq" Jan 28 01:25:28.398585 containerd[1624]: time="2026-01-28T01:25:28.394493531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62d83d7c088a4a29df8f82e3ca502d79689709326fd062401e78e3d055a04a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:28.398738 kubelet[2995]: E0128 01:25:28.395816 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fh6hq_kube-system(acadd2db-d3cd-417f-93e7-6a9e249c8d3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fh6hq_kube-system(acadd2db-d3cd-417f-93e7-6a9e249c8d3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c62d83d7c088a4a29df8f82e3ca502d79689709326fd062401e78e3d055a04a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fh6hq" podUID="acadd2db-d3cd-417f-93e7-6a9e249c8d3d" Jan 28 01:25:28.401247 containerd[1624]: time="2026-01-28T01:25:28.400942480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed79465c2484c26e4c9e6a58de75eeaade8a783d7d20ce94e0c768bd904e1dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:28.411584 kubelet[2995]: E0128 01:25:28.402315 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed79465c2484c26e4c9e6a58de75eeaade8a783d7d20ce94e0c768bd904e1dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:28.411584 kubelet[2995]: E0128 01:25:28.402625 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed79465c2484c26e4c9e6a58de75eeaade8a783d7d20ce94e0c768bd904e1dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" Jan 28 01:25:28.411584 kubelet[2995]: E0128 01:25:28.402882 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed79465c2484c26e4c9e6a58de75eeaade8a783d7d20ce94e0c768bd904e1dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" Jan 28 01:25:28.418483 kubelet[2995]: E0128 01:25:28.403578 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed79465c2484c26e4c9e6a58de75eeaade8a783d7d20ce94e0c768bd904e1dc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:25:29.043992 containerd[1624]: time="2026-01-28T01:25:29.039878581Z" level=error msg="Failed to destroy network for sandbox \"f9a36545b00ba572bc30467647ae677b7347710a8e369dda7a923cb982fc1f4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:29.051903 systemd[1]: run-netns-cni\x2db730968d\x2dd295\x2dc859\x2d34d5\x2d987013374df7.mount: Deactivated successfully. 
Jan 28 01:25:29.080555 containerd[1624]: time="2026-01-28T01:25:29.079362908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:29.271233 containerd[1624]: time="2026-01-28T01:25:29.267452779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a36545b00ba572bc30467647ae677b7347710a8e369dda7a923cb982fc1f4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:29.275311 kubelet[2995]: E0128 01:25:29.269585 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a36545b00ba572bc30467647ae677b7347710a8e369dda7a923cb982fc1f4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:29.275311 kubelet[2995]: E0128 01:25:29.269765 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a36545b00ba572bc30467647ae677b7347710a8e369dda7a923cb982fc1f4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g77nq" Jan 28 01:25:29.275311 kubelet[2995]: E0128 01:25:29.269803 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f9a36545b00ba572bc30467647ae677b7347710a8e369dda7a923cb982fc1f4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g77nq" Jan 28 01:25:29.278168 kubelet[2995]: E0128 01:25:29.269943 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9a36545b00ba572bc30467647ae677b7347710a8e369dda7a923cb982fc1f4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:25:29.867167 containerd[1624]: time="2026-01-28T01:25:29.859565074Z" level=error msg="Failed to destroy network for sandbox \"3c43d8d1d432a9aa412abe574d25c8ec6e3a818581807ef323b9c0ea5f7a8bb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:29.866887 systemd[1]: run-netns-cni\x2d200c6774\x2d3291\x2da61d\x2d52e5\x2d6a559b4c55b9.mount: Deactivated successfully. 
Jan 28 01:25:29.891515 containerd[1624]: time="2026-01-28T01:25:29.891165027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c43d8d1d432a9aa412abe574d25c8ec6e3a818581807ef323b9c0ea5f7a8bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:29.892304 kubelet[2995]: E0128 01:25:29.891900 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c43d8d1d432a9aa412abe574d25c8ec6e3a818581807ef323b9c0ea5f7a8bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:29.893263 kubelet[2995]: E0128 01:25:29.891990 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c43d8d1d432a9aa412abe574d25c8ec6e3a818581807ef323b9c0ea5f7a8bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kgc2v" Jan 28 01:25:29.893263 kubelet[2995]: E0128 01:25:29.892895 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c43d8d1d432a9aa412abe574d25c8ec6e3a818581807ef323b9c0ea5f7a8bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kgc2v" 
Jan 28 01:25:29.893263 kubelet[2995]: E0128 01:25:29.892990 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c43d8d1d432a9aa412abe574d25c8ec6e3a818581807ef323b9c0ea5f7a8bb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:25:30.061417 kubelet[2995]: E0128 01:25:30.058889 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:30.075660 kubelet[2995]: E0128 01:25:30.069935 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:31.063879 kubelet[2995]: E0128 01:25:31.061229 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:36.064623 kubelet[2995]: E0128 01:25:36.062277 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:36.335782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139311927.mount: Deactivated successfully. 
Jan 28 01:25:36.472612 containerd[1624]: time="2026-01-28T01:25:36.472329412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:36.482981 containerd[1624]: time="2026-01-28T01:25:36.482891992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 28 01:25:36.495398 containerd[1624]: time="2026-01-28T01:25:36.493466954Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:36.502257 containerd[1624]: time="2026-01-28T01:25:36.502187283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:36.503168 containerd[1624]: time="2026-01-28T01:25:36.502913903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 55.441253193s" Jan 28 01:25:36.507142 containerd[1624]: time="2026-01-28T01:25:36.505224437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 01:25:36.607253 containerd[1624]: time="2026-01-28T01:25:36.601886079Z" level=info msg="CreateContainer within sandbox \"f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:25:36.650604 containerd[1624]: time="2026-01-28T01:25:36.648920786Z" level=info msg="Container 
037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:25:36.699130 containerd[1624]: time="2026-01-28T01:25:36.693595372Z" level=info msg="CreateContainer within sandbox \"f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79\"" Jan 28 01:25:36.699130 containerd[1624]: time="2026-01-28T01:25:36.695228072Z" level=info msg="StartContainer for \"037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79\"" Jan 28 01:25:36.709244 containerd[1624]: time="2026-01-28T01:25:36.706952685Z" level=info msg="connecting to shim 037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79" address="unix:///run/containerd/s/89b44edf8a2cbecd70ae4545d12b2340686ca0e79aeb70444693e5c66af3a95d" protocol=ttrpc version=3 Jan 28 01:25:37.019250 systemd[1]: Started cri-containerd-037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79.scope - libcontainer container 037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79. 
Jan 28 01:25:37.061138 kubelet[2995]: E0128 01:25:37.060391 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:37.068875 containerd[1624]: time="2026-01-28T01:25:37.068620892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:37.402000 audit: BPF prog-id=172 op=LOAD Jan 28 01:25:37.402000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3545 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:37.472878 kernel: audit: type=1334 audit(1769563537.402:601): prog-id=172 op=LOAD Jan 28 01:25:37.473308 kernel: audit: type=1300 audit(1769563537.402:601): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3545 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:37.473381 kernel: audit: type=1327 audit(1769563537.402:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033376538383536663935333966333838323038393738306639353432 Jan 28 01:25:37.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033376538383536663935333966333838323038393738306639353432 Jan 28 01:25:37.406000 audit: BPF prog-id=173 
op=LOAD Jan 28 01:25:37.568812 kernel: audit: type=1334 audit(1769563537.406:602): prog-id=173 op=LOAD Jan 28 01:25:37.406000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3545 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:37.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033376538383536663935333966333838323038393738306639353432 Jan 28 01:25:37.624189 kernel: audit: type=1300 audit(1769563537.406:602): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3545 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:37.624444 kernel: audit: type=1327 audit(1769563537.406:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033376538383536663935333966333838323038393738306639353432 Jan 28 01:25:37.624493 kernel: audit: type=1334 audit(1769563537.406:603): prog-id=173 op=UNLOAD Jan 28 01:25:37.406000 audit: BPF prog-id=173 op=UNLOAD Jan 28 01:25:37.406000 audit[4864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:37.670210 kernel: audit: type=1300 audit(1769563537.406:603): arch=c000003e syscall=3 
success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:37.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033376538383536663935333966333838323038393738306639353432 Jan 28 01:25:37.714964 kernel: audit: type=1327 audit(1769563537.406:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033376538383536663935333966333838323038393738306639353432 Jan 28 01:25:37.715210 kernel: audit: type=1334 audit(1769563537.406:604): prog-id=172 op=UNLOAD Jan 28 01:25:37.406000 audit: BPF prog-id=172 op=UNLOAD Jan 28 01:25:37.406000 audit[4864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3545 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:37.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033376538383536663935333966333838323038393738306639353432 Jan 28 01:25:37.406000 audit: BPF prog-id=174 op=LOAD Jan 28 01:25:37.406000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3545 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:37.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033376538383536663935333966333838323038393738306639353432 Jan 28 01:25:37.811242 containerd[1624]: time="2026-01-28T01:25:37.803992975Z" level=info msg="StartContainer for \"037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79\" returns successfully" Jan 28 01:25:37.921298 containerd[1624]: time="2026-01-28T01:25:37.919915235Z" level=error msg="Failed to destroy network for sandbox \"c4f45cec63e712627dda633dc062f56eedcc28ebc769872e267e421cb09382fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:37.953983 systemd[1]: run-netns-cni\x2d284591ed\x2d8cca\x2d6b96\x2d7a5d\x2d39a66135f861.mount: Deactivated successfully. 
Jan 28 01:25:37.982157 containerd[1624]: time="2026-01-28T01:25:37.981492174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f45cec63e712627dda633dc062f56eedcc28ebc769872e267e421cb09382fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:37.983218 kubelet[2995]: E0128 01:25:37.982903 2995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f45cec63e712627dda633dc062f56eedcc28ebc769872e267e421cb09382fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:37.994848 kubelet[2995]: E0128 01:25:37.993905 2995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f45cec63e712627dda633dc062f56eedcc28ebc769872e267e421cb09382fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qs5tp" Jan 28 01:25:37.994848 kubelet[2995]: E0128 01:25:37.993967 2995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f45cec63e712627dda633dc062f56eedcc28ebc769872e267e421cb09382fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-qs5tp" Jan 28 01:25:37.994848 kubelet[2995]: E0128 01:25:37.994399 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qs5tp_kube-system(28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4f45cec63e712627dda633dc062f56eedcc28ebc769872e267e421cb09382fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qs5tp" podUID="28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9" Jan 28 01:25:38.173758 kubelet[2995]: E0128 01:25:38.173704 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:38.277763 kubelet[2995]: I0128 01:25:38.276198 2995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-46chs" podStartSLOduration=3.7332960809999998 podStartE2EDuration="1m49.276174993s" podCreationTimestamp="2026-01-28 01:23:49 +0000 UTC" firstStartedPulling="2026-01-28 01:23:50.965846467 +0000 UTC m=+59.634654112" lastFinishedPulling="2026-01-28 01:25:36.508725379 +0000 UTC m=+165.177533024" observedRunningTime="2026-01-28 01:25:38.271492213 +0000 UTC m=+166.940299868" watchObservedRunningTime="2026-01-28 01:25:38.276174993 +0000 UTC m=+166.944982639" Jan 28 01:25:38.578938 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:25:38.579234 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 01:25:39.071444 containerd[1624]: time="2026-01-28T01:25:39.071269478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:25:39.235694 kubelet[2995]: E0128 01:25:39.233981 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:39.456158 kubelet[2995]: I0128 01:25:39.455826 2995 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77b9f804-ee0a-4a45-895f-6d2585222c51-whisker-ca-bundle\") pod \"77b9f804-ee0a-4a45-895f-6d2585222c51\" (UID: \"77b9f804-ee0a-4a45-895f-6d2585222c51\") " Jan 28 01:25:39.456158 kubelet[2995]: I0128 01:25:39.455910 2995 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24bl6\" (UniqueName: \"kubernetes.io/projected/77b9f804-ee0a-4a45-895f-6d2585222c51-kube-api-access-24bl6\") pod \"77b9f804-ee0a-4a45-895f-6d2585222c51\" (UID: \"77b9f804-ee0a-4a45-895f-6d2585222c51\") " Jan 28 01:25:39.456158 kubelet[2995]: I0128 01:25:39.455941 2995 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77b9f804-ee0a-4a45-895f-6d2585222c51-whisker-backend-key-pair\") pod \"77b9f804-ee0a-4a45-895f-6d2585222c51\" (UID: \"77b9f804-ee0a-4a45-895f-6d2585222c51\") " Jan 28 01:25:39.508911 kubelet[2995]: I0128 01:25:39.494682 2995 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b9f804-ee0a-4a45-895f-6d2585222c51-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "77b9f804-ee0a-4a45-895f-6d2585222c51" (UID: "77b9f804-ee0a-4a45-895f-6d2585222c51"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:25:39.509622 systemd[1]: var-lib-kubelet-pods-77b9f804\x2dee0a\x2d4a45\x2d895f\x2d6d2585222c51-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 01:25:39.531499 kubelet[2995]: I0128 01:25:39.528430 2995 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b9f804-ee0a-4a45-895f-6d2585222c51-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "77b9f804-ee0a-4a45-895f-6d2585222c51" (UID: "77b9f804-ee0a-4a45-895f-6d2585222c51"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:25:39.528498 systemd[1]: var-lib-kubelet-pods-77b9f804\x2dee0a\x2d4a45\x2d895f\x2d6d2585222c51-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24bl6.mount: Deactivated successfully. Jan 28 01:25:39.534150 kubelet[2995]: I0128 01:25:39.530303 2995 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b9f804-ee0a-4a45-895f-6d2585222c51-kube-api-access-24bl6" (OuterVolumeSpecName: "kube-api-access-24bl6") pod "77b9f804-ee0a-4a45-895f-6d2585222c51" (UID: "77b9f804-ee0a-4a45-895f-6d2585222c51"). InnerVolumeSpecName "kube-api-access-24bl6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:25:39.583144 kubelet[2995]: I0128 01:25:39.582723 2995 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-24bl6\" (UniqueName: \"kubernetes.io/projected/77b9f804-ee0a-4a45-895f-6d2585222c51-kube-api-access-24bl6\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:39.583144 kubelet[2995]: I0128 01:25:39.582791 2995 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77b9f804-ee0a-4a45-895f-6d2585222c51-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:39.583144 kubelet[2995]: I0128 01:25:39.582812 2995 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77b9f804-ee0a-4a45-895f-6d2585222c51-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:40.722346 kubelet[2995]: E0128 01:25:40.709260 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:40.724148 containerd[1624]: time="2026-01-28T01:25:40.721809987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:40.877599 systemd[1]: Removed slice kubepods-besteffort-pod77b9f804_ee0a_4a45_895f_6d2585222c51.slice - libcontainer container kubepods-besteffort-pod77b9f804_ee0a_4a45_895f_6d2585222c51.slice. 
Jan 28 01:25:41.498319 containerd[1624]: time="2026-01-28T01:25:41.490699751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:41.512671 containerd[1624]: time="2026-01-28T01:25:41.508740945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:41.512671 containerd[1624]: time="2026-01-28T01:25:41.508917324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:48.655448 containerd[1624]: time="2026-01-28T01:25:48.652524209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:25:48.701298 kubelet[2995]: E0128 01:25:48.695849 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:49.168447 kubelet[2995]: E0128 01:25:49.162975 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:49.217688 containerd[1624]: time="2026-01-28T01:25:49.192743240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:49.250850 kubelet[2995]: I0128 01:25:49.250750 2995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b9f804-ee0a-4a45-895f-6d2585222c51" path="/var/lib/kubelet/pods/77b9f804-ee0a-4a45-895f-6d2585222c51/volumes" Jan 28 01:25:49.463212 systemd[1]: Created slice 
kubepods-besteffort-podfe814487_9d96_47c2_af16_f4ab9eb63844.slice - libcontainer container kubepods-besteffort-podfe814487_9d96_47c2_af16_f4ab9eb63844.slice. Jan 28 01:25:49.476319 kubelet[2995]: I0128 01:25:49.476269 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lwwl\" (UniqueName: \"kubernetes.io/projected/fe814487-9d96-47c2-af16-f4ab9eb63844-kube-api-access-5lwwl\") pod \"whisker-b57dd5bd-wgdx5\" (UID: \"fe814487-9d96-47c2-af16-f4ab9eb63844\") " pod="calico-system/whisker-b57dd5bd-wgdx5" Jan 28 01:25:49.476584 kubelet[2995]: I0128 01:25:49.476559 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe814487-9d96-47c2-af16-f4ab9eb63844-whisker-backend-key-pair\") pod \"whisker-b57dd5bd-wgdx5\" (UID: \"fe814487-9d96-47c2-af16-f4ab9eb63844\") " pod="calico-system/whisker-b57dd5bd-wgdx5" Jan 28 01:25:49.476857 kubelet[2995]: I0128 01:25:49.476694 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe814487-9d96-47c2-af16-f4ab9eb63844-whisker-ca-bundle\") pod \"whisker-b57dd5bd-wgdx5\" (UID: \"fe814487-9d96-47c2-af16-f4ab9eb63844\") " pod="calico-system/whisker-b57dd5bd-wgdx5" Jan 28 01:25:49.485268 containerd[1624]: time="2026-01-28T01:25:49.485200806Z" level=error msg="ExecSync for \"037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Jan 28 01:25:49.491184 kubelet[2995]: E0128 01:25:49.491128 2995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" 
containerID="037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 28 01:25:49.797515 containerd[1624]: time="2026-01-28T01:25:49.797334785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b57dd5bd-wgdx5,Uid:fe814487-9d96-47c2-af16-f4ab9eb63844,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:51.957676 systemd-networkd[1503]: cali345d9ea93a7: Link UP Jan 28 01:25:51.985848 systemd-networkd[1503]: cali345d9ea93a7: Gained carrier Jan 28 01:25:52.157484 containerd[1624]: 2026-01-28 01:25:39.461 [INFO][4967] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:25:52.157484 containerd[1624]: 2026-01-28 01:25:40.835 [INFO][4967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0 calico-apiserver-59889c77b- calico-apiserver b1cfce8a-501a-4088-a990-12172f5320b3 1092 0 2026-01-28 01:23:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59889c77b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59889c77b-c5nrl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali345d9ea93a7 [] [] }} ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-c5nrl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--c5nrl-" Jan 28 01:25:52.157484 containerd[1624]: 2026-01-28 01:25:40.877 [INFO][4967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-c5nrl" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" Jan 28 01:25:52.157484 containerd[1624]: 2026-01-28 01:25:51.197 [INFO][5047] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" HandleID="k8s-pod-network.d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Workload="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.200 [INFO][5047] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" HandleID="k8s-pod-network.d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Workload="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c68d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59889c77b-c5nrl", "timestamp":"2026-01-28 01:25:51.197548097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.201 [INFO][5047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.204 [INFO][5047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.211 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.423 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" host="localhost" Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.529 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.564 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.570 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.586 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:52.159384 containerd[1624]: 2026-01-28 01:25:51.586 [INFO][5047] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" host="localhost" Jan 28 01:25:52.159928 containerd[1624]: 2026-01-28 01:25:51.615 [INFO][5047] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3 Jan 28 01:25:52.159928 containerd[1624]: 2026-01-28 01:25:51.659 [INFO][5047] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" host="localhost" Jan 28 01:25:52.159928 containerd[1624]: 2026-01-28 01:25:51.707 [INFO][5047] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" host="localhost" Jan 28 01:25:52.159928 containerd[1624]: 2026-01-28 01:25:51.707 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" host="localhost" Jan 28 01:25:52.159928 containerd[1624]: 2026-01-28 01:25:51.707 [INFO][5047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:25:52.159928 containerd[1624]: 2026-01-28 01:25:51.707 [INFO][5047] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" HandleID="k8s-pod-network.d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Workload="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" Jan 28 01:25:52.160359 containerd[1624]: 2026-01-28 01:25:51.766 [INFO][4967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-c5nrl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0", GenerateName:"calico-apiserver-59889c77b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b1cfce8a-501a-4088-a990-12172f5320b3", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59889c77b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59889c77b-c5nrl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali345d9ea93a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:52.160535 containerd[1624]: 2026-01-28 01:25:51.767 [INFO][4967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-c5nrl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" Jan 28 01:25:52.160535 containerd[1624]: 2026-01-28 01:25:51.767 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali345d9ea93a7 ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-c5nrl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" Jan 28 01:25:52.160535 containerd[1624]: 2026-01-28 01:25:51.985 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-c5nrl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" Jan 28 01:25:52.160652 containerd[1624]: 2026-01-28 01:25:52.009 [INFO][4967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-c5nrl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0", GenerateName:"calico-apiserver-59889c77b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b1cfce8a-501a-4088-a990-12172f5320b3", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59889c77b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3", Pod:"calico-apiserver-59889c77b-c5nrl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali345d9ea93a7", MAC:"06:51:89:23:e7:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:52.160831 containerd[1624]: 2026-01-28 01:25:52.124 [INFO][4967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-c5nrl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--c5nrl-eth0" Jan 28 01:25:52.215916 systemd-networkd[1503]: cali23f86e8ee32: Link UP Jan 28 01:25:52.230216 systemd-networkd[1503]: cali23f86e8ee32: Gained carrier Jan 28 01:25:52.499985 containerd[1624]: 2026-01-28 01:25:49.823 [INFO][5053] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:25:52.499985 containerd[1624]: 2026-01-28 01:25:49.954 [INFO][5053] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0 calico-apiserver-59889c77b- calico-apiserver 63a937f8-f218-45d1-87c6-b75ad5fcad55 1080 0 2026-01-28 01:23:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59889c77b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59889c77b-9msjb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali23f86e8ee32 [] [] }} ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-9msjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--9msjb-" Jan 28 01:25:52.499985 containerd[1624]: 2026-01-28 01:25:49.954 [INFO][5053] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-9msjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" Jan 28 01:25:52.499985 containerd[1624]: 2026-01-28 01:25:51.224 [INFO][5130] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" HandleID="k8s-pod-network.ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Workload="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:51.228 [INFO][5130] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" HandleID="k8s-pod-network.ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Workload="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59889c77b-9msjb", "timestamp":"2026-01-28 01:25:51.224589989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:51.233 [INFO][5130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:51.712 [INFO][5130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:51.712 [INFO][5130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:51.798 [INFO][5130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" host="localhost" Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:51.896 [INFO][5130] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:51.957 [INFO][5130] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:51.991 [INFO][5130] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:52.027 [INFO][5130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:52.500499 containerd[1624]: 2026-01-28 01:25:52.028 [INFO][5130] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" host="localhost" Jan 28 01:25:52.501629 containerd[1624]: 2026-01-28 01:25:52.054 [INFO][5130] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70 Jan 28 01:25:52.501629 containerd[1624]: 2026-01-28 01:25:52.090 [INFO][5130] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" host="localhost" Jan 28 01:25:52.501629 containerd[1624]: 2026-01-28 01:25:52.147 [INFO][5130] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" host="localhost" Jan 28 01:25:52.501629 containerd[1624]: 2026-01-28 01:25:52.147 [INFO][5130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" host="localhost" Jan 28 01:25:52.501629 containerd[1624]: 2026-01-28 01:25:52.147 [INFO][5130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:25:52.501629 containerd[1624]: 2026-01-28 01:25:52.156 [INFO][5130] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" HandleID="k8s-pod-network.ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Workload="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" Jan 28 01:25:52.501893 containerd[1624]: 2026-01-28 01:25:52.202 [INFO][5053] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-9msjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0", GenerateName:"calico-apiserver-59889c77b-", Namespace:"calico-apiserver", SelfLink:"", UID:"63a937f8-f218-45d1-87c6-b75ad5fcad55", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59889c77b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59889c77b-9msjb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23f86e8ee32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:52.502176 containerd[1624]: 2026-01-28 01:25:52.205 [INFO][5053] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-9msjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" Jan 28 01:25:52.502176 containerd[1624]: 2026-01-28 01:25:52.210 [INFO][5053] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23f86e8ee32 ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-9msjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" Jan 28 01:25:52.502176 containerd[1624]: 2026-01-28 01:25:52.265 [INFO][5053] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-9msjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" Jan 28 01:25:52.502309 containerd[1624]: 2026-01-28 01:25:52.267 [INFO][5053] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-9msjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0", GenerateName:"calico-apiserver-59889c77b-", Namespace:"calico-apiserver", SelfLink:"", UID:"63a937f8-f218-45d1-87c6-b75ad5fcad55", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59889c77b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70", Pod:"calico-apiserver-59889c77b-9msjb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23f86e8ee32", MAC:"aa:55:cb:95:c5:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:52.502468 containerd[1624]: 2026-01-28 01:25:52.456 [INFO][5053] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" Namespace="calico-apiserver" Pod="calico-apiserver-59889c77b-9msjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59889c77b--9msjb-eth0" Jan 28 01:25:52.828334 containerd[1624]: time="2026-01-28T01:25:52.819836975Z" level=info msg="connecting to shim ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70" address="unix:///run/containerd/s/38c457954c21b4402c661f207557b04bbf8fb341cbbc6bd97fb9b3e76a73e5b4" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:52.996627 containerd[1624]: time="2026-01-28T01:25:52.990402618Z" level=info msg="connecting to shim d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" address="unix:///run/containerd/s/aac14da7d438557ca67d32967919d05de792efa1ed501398186a499ad22b7cfc" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:53.103840 systemd-networkd[1503]: cali5146205f6ff: Link UP Jan 28 01:25:53.127465 systemd-networkd[1503]: cali5146205f6ff: Gained carrier Jan 28 01:25:53.276612 containerd[1624]: 2026-01-28 01:25:49.456 [INFO][4991] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:25:53.276612 containerd[1624]: 2026-01-28 01:25:49.753 [INFO][4991] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0 coredns-674b8bbfcf- kube-system acadd2db-d3cd-417f-93e7-6a9e249c8d3d 1089 0 2026-01-28 01:22:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fh6hq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5146205f6ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fh6hq" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fh6hq-" Jan 28 01:25:53.276612 containerd[1624]: 2026-01-28 01:25:49.753 [INFO][4991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fh6hq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" Jan 28 01:25:53.276612 containerd[1624]: 2026-01-28 01:25:51.241 [INFO][5097] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" HandleID="k8s-pod-network.0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Workload="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:51.246 [INFO][5097] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" HandleID="k8s-pod-network.0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Workload="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048be50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fh6hq", "timestamp":"2026-01-28 01:25:51.241777632 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:51.246 [INFO][5097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:52.149 [INFO][5097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:52.149 [INFO][5097] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:52.267 [INFO][5097] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" host="localhost" Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:52.317 [INFO][5097] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:52.401 [INFO][5097] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:52.509 [INFO][5097] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:52.646 [INFO][5097] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:53.280138 containerd[1624]: 2026-01-28 01:25:52.646 [INFO][5097] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" host="localhost" Jan 28 01:25:53.287496 containerd[1624]: 2026-01-28 01:25:52.657 [INFO][5097] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26 Jan 28 01:25:53.287496 containerd[1624]: 2026-01-28 01:25:52.739 [INFO][5097] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" host="localhost" Jan 28 01:25:53.287496 containerd[1624]: 2026-01-28 01:25:52.798 [INFO][5097] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" host="localhost" Jan 28 01:25:53.287496 containerd[1624]: 2026-01-28 01:25:52.798 [INFO][5097] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" host="localhost" Jan 28 01:25:53.287496 containerd[1624]: 2026-01-28 01:25:52.798 [INFO][5097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:25:53.287496 containerd[1624]: 2026-01-28 01:25:52.798 [INFO][5097] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" HandleID="k8s-pod-network.0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Workload="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" Jan 28 01:25:53.287721 containerd[1624]: 2026-01-28 01:25:52.831 [INFO][4991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fh6hq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acadd2db-d3cd-417f-93e7-6a9e249c8d3d", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fh6hq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5146205f6ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:53.287918 containerd[1624]: 2026-01-28 01:25:52.853 [INFO][4991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fh6hq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" Jan 28 01:25:53.287918 containerd[1624]: 2026-01-28 01:25:52.853 [INFO][4991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5146205f6ff ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fh6hq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" Jan 28 01:25:53.287918 containerd[1624]: 2026-01-28 01:25:53.088 [INFO][4991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fh6hq" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" Jan 28 01:25:53.313376 containerd[1624]: 2026-01-28 01:25:53.088 [INFO][4991] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fh6hq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acadd2db-d3cd-417f-93e7-6a9e249c8d3d", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26", Pod:"coredns-674b8bbfcf-fh6hq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5146205f6ff", MAC:"ee:0c:34:80:68:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:53.313376 containerd[1624]: 2026-01-28 01:25:53.263 [INFO][4991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fh6hq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fh6hq-eth0" Jan 28 01:25:53.316643 systemd[1]: Started cri-containerd-ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70.scope - libcontainer container ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70. Jan 28 01:25:53.535000 audit: BPF prog-id=175 op=LOAD Jan 28 01:25:53.550509 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 28 01:25:53.550667 kernel: audit: type=1334 audit(1769563553.535:606): prog-id=175 op=LOAD Jan 28 01:25:53.561000 audit: BPF prog-id=176 op=LOAD Jan 28 01:25:53.584133 kernel: audit: type=1334 audit(1769563553.561:607): prog-id=176 op=LOAD Jan 28 01:25:53.561000 audit[5311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.654603 kernel: audit: type=1300 audit(1769563553.561:607): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.654693 kernel: audit: type=1327 audit(1769563553.561:607): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.684973 kernel: audit: type=1334 audit(1769563553.561:608): prog-id=176 op=UNLOAD Jan 28 01:25:53.561000 audit: BPF prog-id=176 op=UNLOAD Jan 28 01:25:53.561000 audit[5311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.704859 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:25:53.737201 kernel: audit: type=1300 audit(1769563553.561:608): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.737421 kernel: audit: type=1327 audit(1769563553.561:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.561000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.721870 systemd-networkd[1503]: cali345d9ea93a7: Gained IPv6LL Jan 28 01:25:53.786290 kernel: audit: type=1334 audit(1769563553.564:609): prog-id=177 op=LOAD Jan 28 01:25:53.788956 kernel: audit: type=1300 audit(1769563553.564:609): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.795277 kernel: audit: type=1327 audit(1769563553.564:609): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.564000 audit: BPF prog-id=177 op=LOAD Jan 28 01:25:53.564000 audit[5311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.564000 audit: BPF prog-id=178 op=LOAD Jan 28 01:25:53.564000 audit[5311]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 
items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.564000 audit: BPF prog-id=178 op=UNLOAD Jan 28 01:25:53.564000 audit[5311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.566000 audit: BPF prog-id=177 op=UNLOAD Jan 28 01:25:53.566000 audit[5311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.566000 audit: BPF prog-id=179 op=LOAD Jan 28 01:25:53.566000 audit[5311]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=5277 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:53.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303333323065633563616135363338386537623961313039656666 Jan 28 01:25:53.819357 systemd-networkd[1503]: cali01ab075b81c: Link UP Jan 28 01:25:53.848342 systemd[1]: Started cri-containerd-d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3.scope - libcontainer container d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3. Jan 28 01:25:53.854286 systemd-networkd[1503]: cali01ab075b81c: Gained carrier Jan 28 01:25:54.001822 containerd[1624]: time="2026-01-28T01:25:54.000741548Z" level=info msg="connecting to shim 0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26" address="unix:///run/containerd/s/e3729390d73de46936d231efb3ab4ef19528e17833deca0db84dabcd17b36336" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:49.532 [INFO][5012] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:49.848 [INFO][5012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kgc2v-eth0 csi-node-driver- calico-system 845c6024-31b8-4f74-be49-c76c18f222f2 883 0 2026-01-28 01:23:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] 
[] [] []} {k8s localhost csi-node-driver-kgc2v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali01ab075b81c [] [] }} ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Namespace="calico-system" Pod="csi-node-driver-kgc2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--kgc2v-" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:49.849 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Namespace="calico-system" Pod="csi-node-driver-kgc2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--kgc2v-eth0" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:51.281 [INFO][5108] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" HandleID="k8s-pod-network.e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Workload="localhost-k8s-csi--node--driver--kgc2v-eth0" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:51.281 [INFO][5108] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" HandleID="k8s-pod-network.e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Workload="localhost-k8s-csi--node--driver--kgc2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ce150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kgc2v", "timestamp":"2026-01-28 01:25:51.281483477 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:51.281 [INFO][5108] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:52.801 [INFO][5108] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:52.806 [INFO][5108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:52.881 [INFO][5108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" host="localhost" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.131 [INFO][5108] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.232 [INFO][5108] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.280 [INFO][5108] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.334 [INFO][5108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.334 [INFO][5108] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" host="localhost" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.369 [INFO][5108] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.418 [INFO][5108] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" host="localhost" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.486 [INFO][5108] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" host="localhost" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.486 [INFO][5108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" host="localhost" Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.486 [INFO][5108] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:25:54.023775 containerd[1624]: 2026-01-28 01:25:53.486 [INFO][5108] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" HandleID="k8s-pod-network.e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Workload="localhost-k8s-csi--node--driver--kgc2v-eth0" Jan 28 01:25:54.025464 containerd[1624]: 2026-01-28 01:25:53.568 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Namespace="calico-system" Pod="csi-node-driver-kgc2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--kgc2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kgc2v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"845c6024-31b8-4f74-be49-c76c18f222f2", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kgc2v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01ab075b81c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:54.025464 containerd[1624]: 2026-01-28 01:25:53.568 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Namespace="calico-system" Pod="csi-node-driver-kgc2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--kgc2v-eth0" Jan 28 01:25:54.025464 containerd[1624]: 2026-01-28 01:25:53.568 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01ab075b81c ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Namespace="calico-system" Pod="csi-node-driver-kgc2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--kgc2v-eth0" Jan 28 01:25:54.025464 containerd[1624]: 2026-01-28 01:25:53.866 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Namespace="calico-system" Pod="csi-node-driver-kgc2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--kgc2v-eth0" Jan 28 01:25:54.025464 containerd[1624]: 2026-01-28 01:25:53.883 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" Namespace="calico-system" Pod="csi-node-driver-kgc2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--kgc2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kgc2v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"845c6024-31b8-4f74-be49-c76c18f222f2", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd", Pod:"csi-node-driver-kgc2v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01ab075b81c", MAC:"76:a8:0e:a3:41:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:54.025464 containerd[1624]: 2026-01-28 01:25:53.994 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" 
Namespace="calico-system" Pod="csi-node-driver-kgc2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--kgc2v-eth0" Jan 28 01:25:54.089702 systemd-networkd[1503]: cali23f86e8ee32: Gained IPv6LL Jan 28 01:25:54.295000 audit: BPF prog-id=180 op=LOAD Jan 28 01:25:54.299000 audit: BPF prog-id=181 op=LOAD Jan 28 01:25:54.299000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019c238 a2=98 a3=0 items=0 ppid=5285 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:54.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436316238336330363964306361303430373234666464326465616662 Jan 28 01:25:54.299000 audit: BPF prog-id=181 op=UNLOAD Jan 28 01:25:54.299000 audit[5345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5285 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:54.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436316238336330363964306361303430373234666464326465616662 Jan 28 01:25:54.299000 audit: BPF prog-id=182 op=LOAD Jan 28 01:25:54.299000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019c488 a2=98 a3=0 items=0 ppid=5285 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:25:54.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436316238336330363964306361303430373234666464326465616662 Jan 28 01:25:54.299000 audit: BPF prog-id=183 op=LOAD Jan 28 01:25:54.299000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00019c218 a2=98 a3=0 items=0 ppid=5285 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:54.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436316238336330363964306361303430373234666464326465616662 Jan 28 01:25:54.299000 audit: BPF prog-id=183 op=UNLOAD Jan 28 01:25:54.299000 audit[5345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5285 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:54.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436316238336330363964306361303430373234666464326465616662 Jan 28 01:25:54.299000 audit: BPF prog-id=182 op=UNLOAD Jan 28 01:25:54.299000 audit[5345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5285 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:54.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436316238336330363964306361303430373234666464326465616662 Jan 28 01:25:54.299000 audit: BPF prog-id=184 op=LOAD Jan 28 01:25:54.299000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019c6e8 a2=98 a3=0 items=0 ppid=5285 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:54.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436316238336330363964306361303430373234666464326465616662 Jan 28 01:25:54.462139 systemd-networkd[1503]: cali5014f0452cd: Link UP Jan 28 01:25:54.472708 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:25:54.477971 systemd-networkd[1503]: cali5014f0452cd: Gained carrier Jan 28 01:25:54.506866 containerd[1624]: time="2026-01-28T01:25:54.506560996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-9msjb,Uid:63a937f8-f218-45d1-87c6-b75ad5fcad55,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70\"" Jan 28 01:25:54.579975 containerd[1624]: time="2026-01-28T01:25:54.579923099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:25:54.607594 containerd[1624]: time="2026-01-28T01:25:54.601615680Z" level=info msg="connecting to shim 
e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd" address="unix:///run/containerd/s/e71e1d066cec7754daf1760e4bddf5331d2794c268dd008628ce32ab44072ae0" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:55.702476 systemd[1]: Started cri-containerd-0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26.scope - libcontainer container 0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26. Jan 28 01:25:55.832967 systemd-networkd[1503]: cali5146205f6ff: Gained IPv6LL Jan 28 01:25:55.904703 systemd-networkd[1503]: cali5014f0452cd: Gained IPv6LL Jan 28 01:25:55.972214 systemd-networkd[1503]: cali01ab075b81c: Gained IPv6LL Jan 28 01:25:56.573731 containerd[1624]: time="2026-01-28T01:25:56.553868014Z" level=error msg="get state for d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3" error="context deadline exceeded" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:49.728 [INFO][5011] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:50.200 [INFO][5011] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--g77nq-eth0 goldmane-666569f655- calico-system 8c19397c-299f-4305-bb7b-810de8e940fe 1086 0 2026-01-28 01:23:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-g77nq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5014f0452cd [] [] }} ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Namespace="calico-system" Pod="goldmane-666569f655-g77nq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g77nq-" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:50.200 [INFO][5011] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Namespace="calico-system" Pod="goldmane-666569f655-g77nq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g77nq-eth0" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:51.454 [INFO][5155] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" HandleID="k8s-pod-network.8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Workload="localhost-k8s-goldmane--666569f655--g77nq-eth0" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:51.454 [INFO][5155] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" HandleID="k8s-pod-network.8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Workload="localhost-k8s-goldmane--666569f655--g77nq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000329220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-g77nq", "timestamp":"2026-01-28 01:25:51.454181729 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:51.454 [INFO][5155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:53.490 [INFO][5155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:53.491 [INFO][5155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:53.615 [INFO][5155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" host="localhost" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:53.781 [INFO][5155] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:53.846 [INFO][5155] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:53.864 [INFO][5155] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:53.878 [INFO][5155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:53.878 [INFO][5155] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" host="localhost" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:53.976 [INFO][5155] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:54.202 [INFO][5155] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" host="localhost" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:54.333 [INFO][5155] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" host="localhost" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:54.334 [INFO][5155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" host="localhost" Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:54.345 [INFO][5155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:25:56.724452 containerd[1624]: 2026-01-28 01:25:54.346 [INFO][5155] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" HandleID="k8s-pod-network.8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Workload="localhost-k8s-goldmane--666569f655--g77nq-eth0" Jan 28 01:25:56.728283 containerd[1624]: 2026-01-28 01:25:54.372 [INFO][5011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Namespace="calico-system" Pod="goldmane-666569f655-g77nq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g77nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g77nq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8c19397c-299f-4305-bb7b-810de8e940fe", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-g77nq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5014f0452cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:56.728283 containerd[1624]: 2026-01-28 01:25:54.375 [INFO][5011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Namespace="calico-system" Pod="goldmane-666569f655-g77nq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g77nq-eth0" Jan 28 01:25:56.728283 containerd[1624]: 2026-01-28 01:25:54.375 [INFO][5011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5014f0452cd ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Namespace="calico-system" Pod="goldmane-666569f655-g77nq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g77nq-eth0" Jan 28 01:25:56.728283 containerd[1624]: 2026-01-28 01:25:54.513 [INFO][5011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Namespace="calico-system" Pod="goldmane-666569f655-g77nq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g77nq-eth0" Jan 28 01:25:56.728283 containerd[1624]: 2026-01-28 01:25:54.515 [INFO][5011] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Namespace="calico-system" Pod="goldmane-666569f655-g77nq" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g77nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g77nq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8c19397c-299f-4305-bb7b-810de8e940fe", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc", Pod:"goldmane-666569f655-g77nq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5014f0452cd", MAC:"c6:e8:46:59:56:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:56.728283 containerd[1624]: 2026-01-28 01:25:56.016 [INFO][5011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" Namespace="calico-system" Pod="goldmane-666569f655-g77nq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g77nq-eth0" Jan 28 01:25:56.734310 containerd[1624]: time="2026-01-28T01:25:56.714690792Z" level=warning msg="unknown status" 
status=0 Jan 28 01:26:01.591162 containerd[1624]: time="2026-01-28T01:26:01.589363934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:01.680650 containerd[1624]: time="2026-01-28T01:26:01.679205409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:01.681575 containerd[1624]: time="2026-01-28T01:26:01.681252330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:01.693898 kubelet[2995]: E0128 01:26:01.689285 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:01.693898 kubelet[2995]: E0128 01:26:01.689347 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:01.694704 kubelet[2995]: E0128 01:26:01.691685 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwzkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:01.709691 kubelet[2995]: E0128 01:26:01.694992 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:26:01.848782 kubelet[2995]: E0128 01:26:01.846953 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.783s" Jan 28 01:26:01.902481 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 28 01:26:01.902615 kernel: audit: type=1334 audit(1769563561.874:622): prog-id=185 op=LOAD Jan 28 01:26:01.874000 audit: BPF prog-id=185 op=LOAD Jan 28 01:26:01.917000 audit: BPF prog-id=186 op=LOAD Jan 28 01:26:01.963702 kernel: audit: type=1334 audit(1769563561.917:623): prog-id=186 op=LOAD Jan 28 01:26:01.963865 kernel: audit: type=1300 audit(1769563561.917:623): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:01.917000 audit[5441]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:02.032472 kernel: audit: type=1327 audit(1769563561.917:623): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:01.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:02.028846 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:26:02.104568 kernel: audit: type=1334 audit(1769563561.917:624): prog-id=186 op=UNLOAD Jan 28 01:26:02.385544 kernel: audit: type=1300 audit(1769563561.917:624): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:02.481583 kernel: audit: type=1327 audit(1769563561.917:624): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:02.481641 kernel: audit: type=1334 audit(1769563561.917:625): prog-id=187 op=LOAD Jan 28 01:26:02.481682 kernel: audit: type=1300 audit(1769563561.917:625): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:02.481720 kernel: audit: type=1327 
audit(1769563561.917:625): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:01.917000 audit: BPF prog-id=186 op=UNLOAD Jan 28 01:26:01.917000 audit[5441]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:01.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:01.917000 audit: BPF prog-id=187 op=LOAD Jan 28 01:26:01.917000 audit[5441]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:01.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:02.430695 systemd[1]: Started cri-containerd-e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd.scope - libcontainer container e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd. 
Jan 28 01:26:01.917000 audit: BPF prog-id=188 op=LOAD Jan 28 01:26:01.917000 audit[5441]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:01.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:01.917000 audit: BPF prog-id=188 op=UNLOAD Jan 28 01:26:01.917000 audit[5441]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:01.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:01.917000 audit: BPF prog-id=187 op=UNLOAD Jan 28 01:26:01.917000 audit[5441]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:01.917000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:01.917000 audit: BPF prog-id=189 op=LOAD Jan 28 01:26:01.917000 audit[5441]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5417 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:01.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373139623966636239653766386338663266623035376230343737 Jan 28 01:26:02.629864 systemd-networkd[1503]: cali74a20f8e122: Link UP Jan 28 01:26:02.728271 kubelet[2995]: E0128 01:26:02.727275 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:26:02.766931 systemd-networkd[1503]: cali74a20f8e122: Gained carrier Jan 28 01:26:03.014585 containerd[1624]: time="2026-01-28T01:26:03.013360846Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 28 01:26:03.225690 containerd[1624]: time="2026-01-28T01:26:03.225510612Z" level=info msg="connecting to shim 
8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc" address="unix:///run/containerd/s/d436a5c11eec724f306322869c9b3e833c75ee41ba298e9fd72db825bc11cc92" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:26:03.521000 audit: BPF prog-id=190 op=LOAD Jan 28 01:26:03.676000 audit[5550]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:03.676000 audit[5550]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffed22a6f90 a2=0 a3=7ffed22a6f7c items=0 ppid=3100 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:03.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:03.690000 audit[5550]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:03.690000 audit[5550]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffed22a6f90 a2=0 a3=0 items=0 ppid=3100 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:03.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:03.733000 audit: BPF prog-id=191 op=LOAD Jan 28 01:26:03.733000 audit[5496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5473 pid=5496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:26:03.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313932626162343331626266346239633264643164646331336631 Jan 28 01:26:03.733000 audit: BPF prog-id=191 op=UNLOAD Jan 28 01:26:03.733000 audit[5496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5473 pid=5496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:03.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313932626162343331626266346239633264643164646331336631 Jan 28 01:26:03.793000 audit: BPF prog-id=192 op=LOAD Jan 28 01:26:03.793000 audit[5496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5473 pid=5496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:03.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313932626162343331626266346239633264643164646331336631 Jan 28 01:26:03.799000 audit: BPF prog-id=193 op=LOAD Jan 28 01:26:03.859462 kubelet[2995]: E0128 01:26:03.857565 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:26:03.799000 audit[5496]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5473 pid=5496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:03.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313932626162343331626266346239633264643164646331336631 Jan 28 01:26:03.861000 audit: BPF prog-id=193 op=UNLOAD Jan 28 01:26:03.861000 audit[5496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5473 pid=5496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:03.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313932626162343331626266346239633264643164646331336631 Jan 28 01:26:03.861000 audit: BPF prog-id=192 op=UNLOAD Jan 28 01:26:03.861000 audit[5496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5473 pid=5496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:03.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313932626162343331626266346239633264643164646331336631 Jan 28 01:26:03.861000 audit: BPF prog-id=194 op=LOAD Jan 28 01:26:03.861000 audit[5496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5473 pid=5496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:03.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538313932626162343331626266346239633264643164646331336631 Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:50.165 [INFO][5071] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:50.378 [INFO][5071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0 coredns-674b8bbfcf- kube-system 28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9 1091 0 2026-01-28 01:22:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qs5tp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali74a20f8e122 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-qs5tp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qs5tp-" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:50.378 [INFO][5071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-qs5tp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:51.386 [INFO][5166] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" HandleID="k8s-pod-network.c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Workload="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:51.404 [INFO][5166] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" HandleID="k8s-pod-network.c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Workload="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000323c10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qs5tp", "timestamp":"2026-01-28 01:25:51.386573924 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:51.404 [INFO][5166] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:54.347 [INFO][5166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:54.347 [INFO][5166] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:54.460 [INFO][5166] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" host="localhost" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:25:54.577 [INFO][5166] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.648 [INFO][5166] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.667 [INFO][5166] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.697 [INFO][5166] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.697 [INFO][5166] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" host="localhost" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.728 [INFO][5166] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.803 [INFO][5166] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" host="localhost" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.904 [INFO][5166] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" host="localhost" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.904 [INFO][5166] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" host="localhost" Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.904 [INFO][5166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:03.874583 containerd[1624]: 2026-01-28 01:26:01.904 [INFO][5166] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" HandleID="k8s-pod-network.c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Workload="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" Jan 28 01:26:03.875713 containerd[1624]: 2026-01-28 01:26:02.059 [INFO][5071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-qs5tp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qs5tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74a20f8e122", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:03.875713 containerd[1624]: 2026-01-28 01:26:02.060 [INFO][5071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-qs5tp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" Jan 28 01:26:03.875713 containerd[1624]: 2026-01-28 01:26:02.060 [INFO][5071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74a20f8e122 ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-qs5tp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" Jan 28 01:26:03.875713 containerd[1624]: 2026-01-28 01:26:02.803 [INFO][5071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-qs5tp" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" Jan 28 01:26:03.875713 containerd[1624]: 2026-01-28 01:26:02.814 [INFO][5071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-qs5tp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f", Pod:"coredns-674b8bbfcf-qs5tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74a20f8e122", MAC:"5e:6d:4b:b5:d6:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:03.875713 containerd[1624]: 2026-01-28 01:26:03.505 [INFO][5071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-qs5tp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qs5tp-eth0" Jan 28 01:26:04.092435 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:26:04.240539 containerd[1624]: time="2026-01-28T01:26:04.229992969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fh6hq,Uid:acadd2db-d3cd-417f-93e7-6a9e249c8d3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26\"" Jan 28 01:26:04.270487 kubelet[2995]: E0128 01:26:04.265571 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:04.278349 systemd-networkd[1503]: cali74a20f8e122: Gained IPv6LL Jan 28 01:26:04.327342 systemd[1]: Started cri-containerd-8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc.scope - libcontainer container 8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc. Jan 28 01:26:04.408483 containerd[1624]: time="2026-01-28T01:26:04.407282183Z" level=info msg="CreateContainer within sandbox \"0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:26:04.561943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753556408.mount: Deactivated successfully. 
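The IPAM trace above claims 192.168.88.134 out of the affine block 192.168.88.128/26. A quick containment check with the standard library (a /26 spans .128 through .191):

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")   # affine block from the log
assigned = ipaddress.ip_address("192.168.88.134")   # IP claimed for coredns

assert assigned in block
print(block.network_address, block.broadcast_address)
# 192.168.88.128 192.168.88.191
```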
Jan 28 01:26:04.618708 containerd[1624]: time="2026-01-28T01:26:04.618651334Z" level=info msg="connecting to shim c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f" address="unix:///run/containerd/s/bb5cf5b70e5cfd9a7f81a4a5594c4108bb8eb0aca57c4625abeb773ff32873c0" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:26:04.644491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341891339.mount: Deactivated successfully. Jan 28 01:26:04.890243 containerd[1624]: time="2026-01-28T01:26:04.881971887Z" level=info msg="Container bbbd37ceea0cca7a69e61c68be3bbb761ff155afafe4bd14651d580d9aa40b60: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:26:04.895262 containerd[1624]: time="2026-01-28T01:26:04.895224307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59889c77b-c5nrl,Uid:b1cfce8a-501a-4088-a990-12172f5320b3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3\"" Jan 28 01:26:05.009486 containerd[1624]: time="2026-01-28T01:26:05.007951094Z" level=info msg="CreateContainer within sandbox \"0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbbd37ceea0cca7a69e61c68be3bbb761ff155afafe4bd14651d580d9aa40b60\"" Jan 28 01:26:05.042375 kubelet[2995]: E0128 01:26:05.034768 2995 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod845c6024_31b8_4f74_be49_c76c18f222f2.slice/cri-containerd-e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd.scope\": RecentStats: unable to find data in memory cache]" Jan 28 01:26:05.074514 containerd[1624]: time="2026-01-28T01:26:05.037680632Z" level=info msg="StartContainer for \"bbbd37ceea0cca7a69e61c68be3bbb761ff155afafe4bd14651d580d9aa40b60\"" Jan 28 01:26:05.094599 containerd[1624]: 
time="2026-01-28T01:26:05.094551066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:05.205169 containerd[1624]: time="2026-01-28T01:26:05.194373569Z" level=info msg="connecting to shim bbbd37ceea0cca7a69e61c68be3bbb761ff155afafe4bd14651d580d9aa40b60" address="unix:///run/containerd/s/e3729390d73de46936d231efb3ab4ef19528e17833deca0db84dabcd17b36336" protocol=ttrpc version=3 Jan 28 01:26:05.298260 systemd-networkd[1503]: cali3396ea2764a: Link UP Jan 28 01:26:05.313564 systemd-networkd[1503]: cali3396ea2764a: Gained carrier Jan 28 01:26:05.490567 containerd[1624]: time="2026-01-28T01:26:05.490441989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:05.510370 containerd[1624]: time="2026-01-28T01:26:05.505506454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:05.510370 containerd[1624]: time="2026-01-28T01:26:05.505638740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:05.511265 kubelet[2995]: E0128 01:26:05.510960 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:05.512169 kubelet[2995]: E0128 01:26:05.511401 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:05.514430 
kubelet[2995]: E0128 01:26:05.512556 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prlsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:05.513773 systemd[1]: Started cri-containerd-c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f.scope - libcontainer container c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f. 
Jan 28 01:26:05.516520 kubelet[2995]: E0128 01:26:05.514428 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:25:49.657 [INFO][5009] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:25:50.128 [INFO][5009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0 calico-kube-controllers-6d85994946- calico-system 2c2d5c47-2f4c-4dc2-af4d-d250680defb0 1083 0 2026-01-28 01:23:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d85994946 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d85994946-hw4kg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3396ea2764a [] [] }} ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Namespace="calico-system" Pod="calico-kube-controllers-6d85994946-hw4kg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:25:50.128 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Namespace="calico-system" Pod="calico-kube-controllers-6d85994946-hw4kg" 
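The ErrImagePull above feeds the ImagePullBackOff seen earlier in this capture: kubelet retries failed pulls with an exponential backoff, documented as starting around 10s and capping at 5m. A hedged sketch of that schedule — the 10s/300s/factor-2 constants come from the Kubernetes documentation, not from this log, and this is an illustration rather than kubelet's actual implementation:

```python
# Approximate kubelet image-pull backoff: exponential growth with a cap.
def backoff_schedule(initial=10, cap=300, factor=2, retries=7):
    delay, out = initial, []
    for _ in range(retries):
        out.append(delay)
        delay = min(delay * factor, cap)  # double, but never exceed the cap
    return out

print(backoff_schedule())
# [10, 20, 40, 80, 160, 300, 300]
```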
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:25:51.471 [INFO][5154] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" HandleID="k8s-pod-network.f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Workload="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:25:51.474 [INFO][5154] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" HandleID="k8s-pod-network.f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Workload="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d85994946-hw4kg", "timestamp":"2026-01-28 01:25:51.471421983 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:25:51.474 [INFO][5154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:01.904 [INFO][5154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:01.905 [INFO][5154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:02.503 [INFO][5154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" host="localhost" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:02.621 [INFO][5154] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:03.039 [INFO][5154] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:03.298 [INFO][5154] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:03.507 [INFO][5154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:03.732 [INFO][5154] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" host="localhost" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:04.065 [INFO][5154] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587 Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:04.358 [INFO][5154] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" host="localhost" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:04.490 [INFO][5154] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" host="localhost" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:04.492 [INFO][5154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" host="localhost" Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:04.492 [INFO][5154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:05.545854 containerd[1624]: 2026-01-28 01:26:04.492 [INFO][5154] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" HandleID="k8s-pod-network.f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Workload="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" Jan 28 01:26:05.547271 containerd[1624]: 2026-01-28 01:26:05.131 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Namespace="calico-system" Pod="calico-kube-controllers-6d85994946-hw4kg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0", GenerateName:"calico-kube-controllers-6d85994946-", Namespace:"calico-system", SelfLink:"", UID:"2c2d5c47-2f4c-4dc2-af4d-d250680defb0", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d85994946", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d85994946-hw4kg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3396ea2764a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:05.547271 containerd[1624]: 2026-01-28 01:26:05.132 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Namespace="calico-system" Pod="calico-kube-controllers-6d85994946-hw4kg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" Jan 28 01:26:05.547271 containerd[1624]: 2026-01-28 01:26:05.132 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3396ea2764a ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Namespace="calico-system" Pod="calico-kube-controllers-6d85994946-hw4kg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" Jan 28 01:26:05.547271 containerd[1624]: 2026-01-28 01:26:05.315 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Namespace="calico-system" Pod="calico-kube-controllers-6d85994946-hw4kg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" Jan 28 01:26:05.547271 containerd[1624]: 
2026-01-28 01:26:05.321 [INFO][5009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Namespace="calico-system" Pod="calico-kube-controllers-6d85994946-hw4kg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0", GenerateName:"calico-kube-controllers-6d85994946-", Namespace:"calico-system", SelfLink:"", UID:"2c2d5c47-2f4c-4dc2-af4d-d250680defb0", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d85994946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587", Pod:"calico-kube-controllers-6d85994946-hw4kg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3396ea2764a", MAC:"f6:42:64:ed:99:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:05.547271 containerd[1624]: 
2026-01-28 01:26:05.476 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" Namespace="calico-system" Pod="calico-kube-controllers-6d85994946-hw4kg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d85994946--hw4kg-eth0" Jan 28 01:26:05.562415 containerd[1624]: time="2026-01-28T01:26:05.562370941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kgc2v,Uid:845c6024-31b8-4f74-be49-c76c18f222f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd\"" Jan 28 01:26:05.596000 audit: BPF prog-id=195 op=LOAD Jan 28 01:26:05.599000 audit: BPF prog-id=196 op=LOAD Jan 28 01:26:05.599000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c238 a2=98 a3=0 items=0 ppid=5538 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837373161316661363938666566303863383562303533366466306537 Jan 28 01:26:05.602000 audit: BPF prog-id=196 op=UNLOAD Jan 28 01:26:05.602000 audit[5553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5538 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.602000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837373161316661363938666566303863383562303533366466306537 Jan 28 01:26:05.603000 audit: BPF prog-id=197 op=LOAD Jan 28 01:26:05.603000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c488 a2=98 a3=0 items=0 ppid=5538 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837373161316661363938666566303863383562303533366466306537 Jan 28 01:26:05.603000 audit: BPF prog-id=198 op=LOAD Jan 28 01:26:05.603000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00020c218 a2=98 a3=0 items=0 ppid=5538 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837373161316661363938666566303863383562303533366466306537 Jan 28 01:26:05.603000 audit: BPF prog-id=198 op=UNLOAD Jan 28 01:26:05.603000 audit[5553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5538 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:26:05.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837373161316661363938666566303863383562303533366466306537 Jan 28 01:26:05.603000 audit: BPF prog-id=197 op=UNLOAD Jan 28 01:26:05.603000 audit[5553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5538 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837373161316661363938666566303863383562303533366466306537 Jan 28 01:26:05.603000 audit: BPF prog-id=199 op=LOAD Jan 28 01:26:05.603000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c6e8 a2=98 a3=0 items=0 ppid=5538 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837373161316661363938666566303863383562303533366466306537 Jan 28 01:26:05.622363 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:26:05.638806 containerd[1624]: time="2026-01-28T01:26:05.638573869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:26:05.743000 audit: BPF 
prog-id=200 op=LOAD Jan 28 01:26:05.749000 audit: BPF prog-id=201 op=LOAD Jan 28 01:26:05.749000 audit[5604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5589 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338656538313331343535313537656534303936616134303231353033 Jan 28 01:26:05.749000 audit: BPF prog-id=201 op=UNLOAD Jan 28 01:26:05.749000 audit[5604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5589 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338656538313331343535313537656534303936616134303231353033 Jan 28 01:26:05.766000 audit: BPF prog-id=202 op=LOAD Jan 28 01:26:05.766000 audit[5604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5589 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.766000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338656538313331343535313537656534303936616134303231353033 Jan 28 01:26:05.766000 audit: BPF prog-id=203 op=LOAD Jan 28 01:26:05.766000 audit[5604]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5589 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338656538313331343535313537656534303936616134303231353033 Jan 28 01:26:05.766000 audit: BPF prog-id=203 op=UNLOAD Jan 28 01:26:05.766000 audit[5604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5589 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338656538313331343535313537656534303936616134303231353033 Jan 28 01:26:05.766000 audit: BPF prog-id=202 op=UNLOAD Jan 28 01:26:05.766000 audit[5604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5589 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:26:05.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338656538313331343535313537656534303936616134303231353033 Jan 28 01:26:05.766000 audit: BPF prog-id=204 op=LOAD Jan 28 01:26:05.766000 audit[5604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5589 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:05.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338656538313331343535313537656534303936616134303231353033 Jan 28 01:26:05.782975 systemd[1]: Started cri-containerd-bbbd37ceea0cca7a69e61c68be3bbb761ff155afafe4bd14651d580d9aa40b60.scope - libcontainer container bbbd37ceea0cca7a69e61c68be3bbb761ff155afafe4bd14651d580d9aa40b60. 
Jan 28 01:26:05.881885 containerd[1624]: time="2026-01-28T01:26:05.880672353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:05.883811 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:26:05.977321 containerd[1624]: time="2026-01-28T01:26:05.975326472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:26:05.977321 containerd[1624]: time="2026-01-28T01:26:05.975667167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:05.977946 kubelet[2995]: E0128 01:26:05.977872 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:05.979628 kubelet[2995]: E0128 01:26:05.977984 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:05.979628 kubelet[2995]: E0128 01:26:05.978860 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:26:05.985629 containerd[1624]: time="2026-01-28T01:26:05.982728637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:26:06.090000 audit: BPF prog-id=205 op=LOAD Jan 28 01:26:06.093000 audit: BPF prog-id=206 op=LOAD Jan 28 01:26:06.093000 audit[5621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000240238 a2=98 a3=0 items=0 ppid=5417 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262626433376365656130636361376136396536316336386265336262 Jan 28 01:26:06.093000 audit: BPF prog-id=206 op=UNLOAD Jan 28 01:26:06.093000 audit[5621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262626433376365656130636361376136396536316336386265336262 Jan 28 01:26:06.093000 audit: BPF prog-id=207 op=LOAD Jan 28 01:26:06.093000 audit[5621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000240488 a2=98 a3=0 items=0 ppid=5417 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:06.093000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262626433376365656130636361376136396536316336386265336262 Jan 28 01:26:06.093000 audit: BPF prog-id=208 op=LOAD Jan 28 01:26:06.093000 audit[5621]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000240218 a2=98 a3=0 items=0 ppid=5417 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262626433376365656130636361376136396536316336386265336262 Jan 28 01:26:06.093000 audit: BPF prog-id=208 op=UNLOAD Jan 28 01:26:06.093000 audit[5621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262626433376365656130636361376136396536316336386265336262 Jan 28 01:26:06.093000 audit: BPF prog-id=207 op=UNLOAD Jan 28 01:26:06.093000 audit[5621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:26:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262626433376365656130636361376136396536316336386265336262 Jan 28 01:26:06.093000 audit: BPF prog-id=209 op=LOAD Jan 28 01:26:06.093000 audit[5621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002406e8 a2=98 a3=0 items=0 ppid=5417 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262626433376365656130636361376136396536316336386265336262 Jan 28 01:26:06.109885 containerd[1624]: time="2026-01-28T01:26:06.099373310Z" level=info msg="connecting to shim f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587" address="unix:///run/containerd/s/505d56297f74dba3404caecdfcd6fb302dae89ad581dd9c4e53ba9c0942ad1a3" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:26:06.254286 kubelet[2995]: E0128 01:26:06.231673 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:26:06.259525 containerd[1624]: time="2026-01-28T01:26:06.257859961Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:06.498244 containerd[1624]: time="2026-01-28T01:26:06.497503706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:26:06.498244 containerd[1624]: time="2026-01-28T01:26:06.497622939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:06.499414 kubelet[2995]: E0128 01:26:06.499364 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:06.500771 kubelet[2995]: E0128 01:26:06.499551 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:06.500771 kubelet[2995]: E0128 01:26:06.499713 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:06.520568 kubelet[2995]: E0128 01:26:06.506457 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:26:06.523781 systemd-networkd[1503]: cali3396ea2764a: Gained IPv6LL Jan 28 01:26:06.526278 systemd-networkd[1503]: cali433d940df88: Link UP Jan 28 01:26:06.586940 systemd-networkd[1503]: cali433d940df88: Gained carrier Jan 28 01:26:06.869399 containerd[1624]: time="2026-01-28T01:26:06.866688297Z" level=info msg="StartContainer for \"bbbd37ceea0cca7a69e61c68be3bbb761ff155afafe4bd14651d580d9aa40b60\" returns successfully" Jan 28 01:26:06.929495 containerd[1624]: time="2026-01-28T01:26:06.929447288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g77nq,Uid:8c19397c-299f-4305-bb7b-810de8e940fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc\"" Jan 28 01:26:06.933000 audit[5715]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:06.943414 containerd[1624]: time="2026-01-28T01:26:06.941864824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:26:06.985318 kernel: kauditd_printk_skb: 
106 callbacks suppressed Jan 28 01:26:06.985468 kernel: audit: type=1325 audit(1769563566.933:664): table=filter:125 family=2 entries=20 op=nft_register_rule pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:06.985831 kubelet[2995]: E0128 01:26:06.970615 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:25:50.592 [INFO][5100] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:25:50.840 [INFO][5100] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--b57dd5bd--wgdx5-eth0 whisker-b57dd5bd- calico-system fe814487-9d96-47c2-af16-f4ab9eb63844 1290 0 2026-01-28 01:25:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b57dd5bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-b57dd5bd-wgdx5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali433d940df88 [] [] }} ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Namespace="calico-system" Pod="whisker-b57dd5bd-wgdx5" WorkloadEndpoint="localhost-k8s-whisker--b57dd5bd--wgdx5-" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:25:50.864 [INFO][5100] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Namespace="calico-system" Pod="whisker-b57dd5bd-wgdx5" WorkloadEndpoint="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:25:51.543 [INFO][5189] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" HandleID="k8s-pod-network.18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Workload="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:25:51.544 [INFO][5189] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" HandleID="k8s-pod-network.18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Workload="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034c310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-b57dd5bd-wgdx5", "timestamp":"2026-01-28 01:25:51.543787543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:25:51.544 [INFO][5189] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:04.492 [INFO][5189] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:04.496 [INFO][5189] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:04.915 [INFO][5189] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" host="localhost" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.163 [INFO][5189] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.456 [INFO][5189] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.482 [INFO][5189] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.503 [INFO][5189] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.503 [INFO][5189] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" host="localhost" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.549 [INFO][5189] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.728 [INFO][5189] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" host="localhost" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.962 [INFO][5189] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" host="localhost" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.972 [INFO][5189] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" host="localhost" Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.972 [INFO][5189] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:06.985930 containerd[1624]: 2026-01-28 01:26:05.974 [INFO][5189] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" HandleID="k8s-pod-network.18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Workload="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" Jan 28 01:26:06.986991 containerd[1624]: 2026-01-28 01:26:06.159 [INFO][5100] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Namespace="calico-system" Pod="whisker-b57dd5bd-wgdx5" WorkloadEndpoint="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b57dd5bd--wgdx5-eth0", GenerateName:"whisker-b57dd5bd-", Namespace:"calico-system", SelfLink:"", UID:"fe814487-9d96-47c2-af16-f4ab9eb63844", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b57dd5bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-b57dd5bd-wgdx5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali433d940df88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:06.986991 containerd[1624]: 2026-01-28 01:26:06.182 [INFO][5100] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Namespace="calico-system" Pod="whisker-b57dd5bd-wgdx5" WorkloadEndpoint="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" Jan 28 01:26:06.986991 containerd[1624]: 2026-01-28 01:26:06.211 [INFO][5100] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali433d940df88 ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Namespace="calico-system" Pod="whisker-b57dd5bd-wgdx5" WorkloadEndpoint="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" Jan 28 01:26:06.986991 containerd[1624]: 2026-01-28 01:26:06.604 [INFO][5100] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Namespace="calico-system" Pod="whisker-b57dd5bd-wgdx5" WorkloadEndpoint="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" Jan 28 01:26:06.986991 containerd[1624]: 2026-01-28 01:26:06.648 [INFO][5100] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Namespace="calico-system" Pod="whisker-b57dd5bd-wgdx5" 
WorkloadEndpoint="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b57dd5bd--wgdx5-eth0", GenerateName:"whisker-b57dd5bd-", Namespace:"calico-system", SelfLink:"", UID:"fe814487-9d96-47c2-af16-f4ab9eb63844", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b57dd5bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb", Pod:"whisker-b57dd5bd-wgdx5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali433d940df88", MAC:"c2:39:60:94:cb:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:06.986991 containerd[1624]: 2026-01-28 01:26:06.734 [INFO][5100] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" Namespace="calico-system" Pod="whisker-b57dd5bd-wgdx5" WorkloadEndpoint="localhost-k8s-whisker--b57dd5bd--wgdx5-eth0" Jan 28 01:26:06.986991 containerd[1624]: time="2026-01-28T01:26:06.961204550Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-qs5tp,Uid:28ecd93a-a70c-43e2-8c1d-5a8ef124c4d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f\"" Jan 28 01:26:07.061252 kernel: audit: type=1300 audit(1769563566.933:664): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf2e48180 a2=0 a3=7ffdf2e4816c items=0 ppid=3100 pid=5715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:06.933000 audit[5715]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf2e48180 a2=0 a3=7ffdf2e4816c items=0 ppid=3100 pid=5715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.061469 containerd[1624]: time="2026-01-28T01:26:07.042399215Z" level=info msg="CreateContainer within sandbox \"c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:26:07.077993 systemd[1]: Started cri-containerd-f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587.scope - libcontainer container f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587. 
Jan 28 01:26:06.933000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:07.152565 kernel: audit: type=1327 audit(1769563566.933:664): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:07.173174 containerd[1624]: time="2026-01-28T01:26:07.169427528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:06.991000 audit[5715]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:07.226430 kernel: audit: type=1325 audit(1769563566.991:665): table=nat:126 family=2 entries=14 op=nft_register_rule pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:07.321577 kernel: audit: type=1300 audit(1769563566.991:665): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf2e48180 a2=0 a3=0 items=0 ppid=3100 pid=5715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:06.991000 audit[5715]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf2e48180 a2=0 a3=0 items=0 ppid=3100 pid=5715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:06.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:07.393446 kubelet[2995]: E0128 01:26:07.365850 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 
01:26:07.394527 kernel: audit: type=1327 audit(1769563566.991:665): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:07.416758 containerd[1624]: time="2026-01-28T01:26:07.248493908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:07.416758 containerd[1624]: time="2026-01-28T01:26:07.249605008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:26:07.419118 kubelet[2995]: E0128 01:26:07.418817 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:07.419118 kubelet[2995]: E0128 01:26:07.418920 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:07.422228 kubelet[2995]: E0128 01:26:07.420562 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptffd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:07.424843 kubelet[2995]: E0128 01:26:07.424162 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:26:07.445988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377578686.mount: Deactivated successfully. 
Jan 28 01:26:07.487161 kubelet[2995]: E0128 01:26:07.486829 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:26:07.495124 containerd[1624]: time="2026-01-28T01:26:07.494307976Z" level=info msg="Container b90ef29831aae74d271640a76bc0ba32b6d4e5feabd32ab24fa254f40b3aab40: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:26:07.495973 kubelet[2995]: E0128 01:26:07.489776 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:26:07.525532 kubelet[2995]: E0128 01:26:07.524852 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:26:07.607270 containerd[1624]: time="2026-01-28T01:26:07.603418900Z" level=info msg="CreateContainer within sandbox \"c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b90ef29831aae74d271640a76bc0ba32b6d4e5feabd32ab24fa254f40b3aab40\"" Jan 28 01:26:07.607270 containerd[1624]: time="2026-01-28T01:26:07.606274840Z" level=info msg="StartContainer for \"b90ef29831aae74d271640a76bc0ba32b6d4e5feabd32ab24fa254f40b3aab40\"" Jan 28 01:26:07.611584 containerd[1624]: time="2026-01-28T01:26:07.608497486Z" level=info msg="connecting to shim 18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb" address="unix:///run/containerd/s/02620d943b7882f8528ef88b78045ece62ed69ddf412c30e0634b0d0eaddbd3d" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:26:07.621204 containerd[1624]: time="2026-01-28T01:26:07.620573840Z" level=info msg="connecting to shim b90ef29831aae74d271640a76bc0ba32b6d4e5feabd32ab24fa254f40b3aab40" address="unix:///run/containerd/s/bb5cf5b70e5cfd9a7f81a4a5594c4108bb8eb0aca57c4625abeb773ff32873c0" protocol=ttrpc version=3 Jan 28 01:26:07.664000 audit: BPF prog-id=210 op=LOAD Jan 28 01:26:07.688153 kubelet[2995]: I0128 01:26:07.671825 2995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fh6hq" podStartSLOduration=193.671751318 podStartE2EDuration="3m13.671751318s" podCreationTimestamp="2026-01-28 01:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
01:26:07.564559667 +0000 UTC m=+196.233367322" watchObservedRunningTime="2026-01-28 01:26:07.671751318 +0000 UTC m=+196.340558973" Jan 28 01:26:07.678000 audit: BPF prog-id=211 op=LOAD Jan 28 01:26:07.703525 kernel: audit: type=1334 audit(1769563567.664:666): prog-id=210 op=LOAD Jan 28 01:26:07.703593 kernel: audit: type=1334 audit(1769563567.678:667): prog-id=211 op=LOAD Jan 28 01:26:07.738512 kernel: audit: type=1300 audit(1769563567.678:667): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5673 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.678000 audit[5704]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5673 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.723612 systemd-networkd[1503]: cali433d940df88: Gained IPv6LL Jan 28 01:26:07.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356365363361343132306564636464336563323334633864626234 Jan 28 01:26:07.772722 kernel: audit: type=1327 audit(1769563567.678:667): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356365363361343132306564636464336563323334633864626234 Jan 28 01:26:07.683000 audit: BPF prog-id=211 op=UNLOAD Jan 28 01:26:07.683000 audit[5704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5673 pid=5704 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356365363361343132306564636464336563323334633864626234 Jan 28 01:26:07.690000 audit: BPF prog-id=212 op=LOAD Jan 28 01:26:07.690000 audit[5704]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5673 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356365363361343132306564636464336563323334633864626234 Jan 28 01:26:07.700000 audit: BPF prog-id=213 op=LOAD Jan 28 01:26:07.700000 audit[5704]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5673 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356365363361343132306564636464336563323334633864626234 Jan 28 01:26:07.700000 audit: BPF prog-id=213 op=UNLOAD Jan 28 01:26:07.700000 audit[5704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=5673 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356365363361343132306564636464336563323334633864626234 Jan 28 01:26:07.700000 audit: BPF prog-id=212 op=UNLOAD Jan 28 01:26:07.700000 audit[5704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5673 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356365363361343132306564636464336563323334633864626234 Jan 28 01:26:07.700000 audit: BPF prog-id=214 op=LOAD Jan 28 01:26:07.700000 audit[5704]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5673 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356365363361343132306564636464336563323334633864626234 Jan 28 01:26:07.779000 audit[5766]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5766 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:07.779000 audit[5766]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdeb9007e0 a2=0 a3=7ffdeb9007cc items=0 ppid=3100 pid=5766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:07.756399 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:26:07.906000 audit[5766]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=5766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:07.906000 audit[5766]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdeb9007e0 a2=0 a3=0 items=0 ppid=3100 pid=5766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:07.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:08.124827 systemd[1]: Started cri-containerd-18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb.scope - libcontainer container 18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb. Jan 28 01:26:08.291745 systemd[1]: Started cri-containerd-b90ef29831aae74d271640a76bc0ba32b6d4e5feabd32ab24fa254f40b3aab40.scope - libcontainer container b90ef29831aae74d271640a76bc0ba32b6d4e5feabd32ab24fa254f40b3aab40. 
Jan 28 01:26:08.563000 audit: BPF prog-id=215 op=LOAD Jan 28 01:26:08.573000 audit: BPF prog-id=216 op=LOAD Jan 28 01:26:08.573000 audit[5760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5589 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239306566323938333161616537346432373136343061373662633062 Jan 28 01:26:08.590000 audit: BPF prog-id=216 op=UNLOAD Jan 28 01:26:08.590000 audit[5760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5589 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239306566323938333161616537346432373136343061373662633062 Jan 28 01:26:08.597000 audit: BPF prog-id=217 op=LOAD Jan 28 01:26:08.597000 audit[5760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5589 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.597000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239306566323938333161616537346432373136343061373662633062 Jan 28 01:26:08.618180 kubelet[2995]: E0128 01:26:08.616773 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:08.622707 kubelet[2995]: E0128 01:26:08.622542 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:26:08.632000 audit: BPF prog-id=218 op=LOAD Jan 28 01:26:08.649000 audit: BPF prog-id=219 op=LOAD Jan 28 01:26:08.649000 audit[5775]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5755 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646466633666636239353864366166623238383966356232653132 Jan 28 01:26:08.660000 audit: BPF prog-id=219 op=UNLOAD Jan 28 01:26:08.660000 audit[5775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5755 pid=5775 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646466633666636239353864366166623238383966356232653132 Jan 28 01:26:08.665000 audit: BPF prog-id=220 op=LOAD Jan 28 01:26:08.665000 audit: BPF prog-id=221 op=LOAD Jan 28 01:26:08.665000 audit[5760]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5589 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239306566323938333161616537346432373136343061373662633062 Jan 28 01:26:08.669000 audit: BPF prog-id=221 op=UNLOAD Jan 28 01:26:08.669000 audit[5760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5589 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239306566323938333161616537346432373136343061373662633062 Jan 28 01:26:08.670000 audit: BPF prog-id=217 op=UNLOAD Jan 28 01:26:08.665000 audit[5775]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5755 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646466633666636239353864366166623238383966356232653132 Jan 28 01:26:08.670000 audit: BPF prog-id=222 op=LOAD Jan 28 01:26:08.670000 audit[5760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5589 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239306566323938333161616537346432373136343061373662633062 Jan 28 01:26:08.671000 audit: BPF prog-id=223 op=LOAD Jan 28 01:26:08.670000 audit[5775]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5755 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646466633666636239353864366166623238383966356232653132 Jan 28 01:26:08.673000 audit: BPF prog-id=222 op=UNLOAD 
Jan 28 01:26:08.673000 audit[5775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5755 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646466633666636239353864366166623238383966356232653132 Jan 28 01:26:08.677000 audit: BPF prog-id=220 op=UNLOAD Jan 28 01:26:08.677000 audit[5775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5755 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646466633666636239353864366166623238383966356232653132 Jan 28 01:26:08.671000 audit[5760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5589 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239306566323938333161616537346432373136343061373662633062 Jan 28 01:26:08.679000 audit: BPF prog-id=224 op=LOAD Jan 28 01:26:08.679000 
audit[5775]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5755 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646466633666636239353864366166623238383966356232653132 Jan 28 01:26:08.712000 audit[5826]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:08.712000 audit[5826]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff2fc03480 a2=0 a3=7fff2fc0346c items=0 ppid=3100 pid=5826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:08.729594 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:26:08.755956 containerd[1624]: time="2026-01-28T01:26:08.753849429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d85994946-hw4kg,Uid:2c2d5c47-2f4c-4dc2-af4d-d250680defb0,Namespace:calico-system,Attempt:0,} returns sandbox id \"f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587\"" Jan 28 01:26:08.757000 audit[5826]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:08.757000 audit[5826]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff2fc03480 a2=0 a3=0 items=0 ppid=3100 pid=5826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:08.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:08.775479 containerd[1624]: time="2026-01-28T01:26:08.774804237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:26:08.874648 containerd[1624]: time="2026-01-28T01:26:08.874578110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:08.882695 containerd[1624]: time="2026-01-28T01:26:08.882651797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:08.882913 containerd[1624]: time="2026-01-28T01:26:08.882876404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:26:08.894244 kubelet[2995]: E0128 01:26:08.891388 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:08.894244 kubelet[2995]: E0128 01:26:08.891457 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:08.894244 kubelet[2995]: E0128 01:26:08.891611 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfmt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:08.894829 kubelet[2995]: E0128 01:26:08.894706 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:26:09.059329 containerd[1624]: time="2026-01-28T01:26:09.042538526Z" level=info msg="StartContainer for \"b90ef29831aae74d271640a76bc0ba32b6d4e5feabd32ab24fa254f40b3aab40\" returns successfully" Jan 28 01:26:09.609971 kubelet[2995]: E0128 01:26:09.609919 2995 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:26:09.634342 kubelet[2995]: E0128 01:26:09.633459 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:09.955667 containerd[1624]: time="2026-01-28T01:26:09.953258109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b57dd5bd-wgdx5,Uid:fe814487-9d96-47c2-af16-f4ab9eb63844,Namespace:calico-system,Attempt:0,} returns sandbox id \"18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb\"" Jan 28 01:26:10.021674 containerd[1624]: time="2026-01-28T01:26:10.016242578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:26:10.106679 kubelet[2995]: I0128 01:26:10.105962 2995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qs5tp" podStartSLOduration=196.105941409 podStartE2EDuration="3m16.105941409s" podCreationTimestamp="2026-01-28 01:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:26:10.094894929 +0000 UTC m=+198.763702594" watchObservedRunningTime="2026-01-28 01:26:10.105941409 +0000 UTC m=+198.774749064" Jan 28 01:26:10.279323 containerd[1624]: time="2026-01-28T01:26:10.277900364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:10.293910 containerd[1624]: 
time="2026-01-28T01:26:10.293607657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:26:10.293910 containerd[1624]: time="2026-01-28T01:26:10.293748339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:10.294255 kubelet[2995]: E0128 01:26:10.293960 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:10.294321 kubelet[2995]: E0128 01:26:10.294276 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:10.294612 kubelet[2995]: E0128 01:26:10.294446 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4880c63e305842ec869c2f1042d40e46,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:10.301898 containerd[1624]: time="2026-01-28T01:26:10.301706507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:26:10.316000 audit[5857]: NETFILTER_CFG table=filter:131 
family=2 entries=20 op=nft_register_rule pid=5857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:10.316000 audit[5857]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff322aa420 a2=0 a3=7fff322aa40c items=0 ppid=3100 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:10.349000 audit[5857]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:10.349000 audit[5857]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff322aa420 a2=0 a3=0 items=0 ppid=3100 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:10.406391 containerd[1624]: time="2026-01-28T01:26:10.406340207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:10.411248 containerd[1624]: time="2026-01-28T01:26:10.411195890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:26:10.412278 containerd[1624]: time="2026-01-28T01:26:10.411461295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active 
requests=0, bytes read=0" Jan 28 01:26:10.412525 kubelet[2995]: E0128 01:26:10.412397 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:10.412620 kubelet[2995]: E0128 01:26:10.412523 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:10.412831 kubelet[2995]: E0128 01:26:10.412703 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recu
rsiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:10.416577 kubelet[2995]: E0128 01:26:10.416405 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:26:10.524000 audit: BPF prog-id=225 op=LOAD Jan 28 01:26:10.524000 audit[5866]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffda20bf40 a2=98 a3=1fffffffffffffff items=0 
ppid=5243 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.524000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:26:10.524000 audit: BPF prog-id=225 op=UNLOAD Jan 28 01:26:10.524000 audit[5866]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffda20bf10 a3=0 items=0 ppid=5243 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.524000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:26:10.528000 audit: BPF prog-id=226 op=LOAD Jan 28 01:26:10.528000 audit[5866]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffda20be20 a2=94 a3=3 items=0 ppid=5243 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:26:10.528000 audit: BPF prog-id=226 op=UNLOAD Jan 28 01:26:10.528000 audit[5866]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=3 a1=7fffda20be20 a2=94 a3=3 items=0 ppid=5243 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:26:10.528000 audit: BPF prog-id=227 op=LOAD Jan 28 01:26:10.528000 audit[5866]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffda20be60 a2=94 a3=7fffda20c040 items=0 ppid=5243 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:26:10.528000 audit: BPF prog-id=227 op=UNLOAD Jan 28 01:26:10.528000 audit[5866]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffda20be60 a2=94 a3=7fffda20c040 items=0 ppid=5243 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:26:10.544000 audit: 
BPF prog-id=228 op=LOAD Jan 28 01:26:10.544000 audit[5867]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc9fe876f0 a2=98 a3=3 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:10.544000 audit: BPF prog-id=228 op=UNLOAD Jan 28 01:26:10.544000 audit[5867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc9fe876c0 a3=0 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:10.549000 audit: BPF prog-id=229 op=LOAD Jan 28 01:26:10.549000 audit[5867]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc9fe874e0 a2=94 a3=54428f items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.549000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:10.549000 audit: BPF prog-id=229 op=UNLOAD Jan 28 01:26:10.549000 audit[5867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc9fe874e0 a2=94 a3=54428f items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.549000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:10.549000 audit: BPF prog-id=230 op=LOAD Jan 28 01:26:10.549000 audit[5867]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc9fe87510 a2=94 a3=2 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.549000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:10.549000 audit: BPF prog-id=230 op=UNLOAD Jan 28 01:26:10.549000 audit[5867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc9fe87510 a2=0 a3=2 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.549000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:10.687420 kubelet[2995]: E0128 01:26:10.681813 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:26:10.687420 kubelet[2995]: E0128 01:26:10.686497 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:26:10.693263 kubelet[2995]: E0128 01:26:10.692250 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:10.904000 audit[5870]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:10.904000 audit[5870]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffa34b3ab0 a2=0 a3=7fffa34b3a9c items=0 ppid=3100 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:10.912000 audit[5870]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:10.912000 audit[5870]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffa34b3ab0 a2=0 a3=0 items=0 ppid=3100 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:10.912000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:11.382000 audit: BPF prog-id=231 op=LOAD Jan 28 01:26:11.382000 audit[5867]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc9fe873d0 a2=94 a3=1 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.382000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.382000 audit: BPF prog-id=231 op=UNLOAD Jan 28 01:26:11.382000 audit[5867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc9fe873d0 a2=94 a3=1 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.382000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.425000 audit: BPF prog-id=232 op=LOAD Jan 28 01:26:11.425000 audit[5867]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc9fe873c0 a2=94 a3=4 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.425000 audit: BPF prog-id=232 op=UNLOAD Jan 28 01:26:11.425000 audit[5867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc9fe873c0 a2=0 a3=4 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.425000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.426000 audit: BPF prog-id=233 op=LOAD Jan 28 01:26:11.426000 audit[5867]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc9fe87220 a2=94 a3=5 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.426000 audit: BPF prog-id=233 op=UNLOAD Jan 28 01:26:11.426000 audit[5867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc9fe87220 a2=0 a3=5 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.426000 audit: BPF prog-id=234 op=LOAD Jan 28 01:26:11.426000 audit[5867]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc9fe87440 a2=94 a3=6 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.426000 audit: BPF prog-id=234 op=UNLOAD Jan 28 01:26:11.426000 audit[5867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc9fe87440 a2=0 a3=6 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 
01:26:11.426000 audit: BPF prog-id=235 op=LOAD Jan 28 01:26:11.426000 audit[5867]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc9fe86bf0 a2=94 a3=88 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.436000 audit: BPF prog-id=236 op=LOAD Jan 28 01:26:11.436000 audit[5867]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc9fe86a70 a2=94 a3=2 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.436000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.436000 audit: BPF prog-id=236 op=UNLOAD Jan 28 01:26:11.436000 audit[5867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc9fe86aa0 a2=0 a3=7ffc9fe86ba0 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.436000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.437000 audit: BPF prog-id=235 op=UNLOAD Jan 28 01:26:11.437000 audit[5867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2964dd10 a2=0 a3=3640d71e6d548051 items=0 ppid=5243 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:26:11.556000 audit: BPF prog-id=237 op=LOAD Jan 28 
01:26:11.556000 audit[5873]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb29e7140 a2=98 a3=1999999999999999 items=0 ppid=5243 pid=5873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.556000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:26:11.556000 audit: BPF prog-id=237 op=UNLOAD Jan 28 01:26:11.556000 audit[5873]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffb29e7110 a3=0 items=0 ppid=5243 pid=5873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.556000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:26:11.556000 audit: BPF prog-id=238 op=LOAD Jan 28 01:26:11.556000 audit[5873]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb29e7020 a2=94 a3=ffff items=0 ppid=5243 pid=5873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.556000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:26:11.556000 audit: BPF prog-id=238 op=UNLOAD Jan 28 01:26:11.556000 audit[5873]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb29e7020 a2=94 a3=ffff items=0 ppid=5243 pid=5873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.556000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:26:11.556000 audit: BPF prog-id=239 op=LOAD Jan 28 01:26:11.556000 audit[5873]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb29e7060 a2=94 a3=7fffb29e7240 items=0 ppid=5243 pid=5873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.556000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:26:11.556000 audit: BPF prog-id=239 op=UNLOAD Jan 28 01:26:11.556000 audit[5873]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb29e7060 a2=94 a3=7fffb29e7240 items=0 ppid=5243 pid=5873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:11.556000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:26:11.775513 kubelet[2995]: E0128 01:26:11.772653 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:11.782120 kubelet[2995]: E0128 01:26:11.781382 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:26:12.179318 kernel: kauditd_printk_skb: 176 callbacks suppressed Jan 28 01:26:12.179489 kernel: audit: type=1325 audit(1769563572.138:728): table=filter:135 family=2 entries=17 op=nft_register_rule pid=5886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:12.138000 audit[5886]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5886 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 28 01:26:12.138000 audit[5886]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8f855da0 a2=0 a3=7ffc8f855d8c items=0 ppid=3100 pid=5886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:12.260149 kernel: audit: type=1300 audit(1769563572.138:728): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8f855da0 a2=0 a3=7ffc8f855d8c items=0 ppid=3100 pid=5886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:12.138000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:12.221000 audit[5886]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=5886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:12.294738 kernel: audit: type=1327 audit(1769563572.138:728): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:12.294882 kernel: audit: type=1325 audit(1769563572.221:729): table=nat:136 family=2 entries=35 op=nft_register_chain pid=5886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:12.294920 kernel: audit: type=1300 audit(1769563572.221:729): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc8f855da0 a2=0 a3=7ffc8f855d8c items=0 ppid=3100 pid=5886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:12.221000 audit[5886]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc8f855da0 
a2=0 a3=7ffc8f855d8c items=0 ppid=3100 pid=5886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:12.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:12.358292 kernel: audit: type=1327 audit(1769563572.221:729): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:12.795255 kubelet[2995]: E0128 01:26:12.790866 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:13.305963 systemd-networkd[1503]: vxlan.calico: Link UP Jan 28 01:26:13.305980 systemd-networkd[1503]: vxlan.calico: Gained carrier Jan 28 01:26:13.727000 audit: BPF prog-id=240 op=LOAD Jan 28 01:26:13.748165 kernel: audit: type=1334 audit(1769563573.727:730): prog-id=240 op=LOAD Jan 28 01:26:13.727000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd83737ee0 a2=98 a3=0 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.790678 kernel: audit: type=1300 audit(1769563573.727:730): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd83737ee0 a2=98 a3=0 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.727000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.867243 kernel: audit: type=1327 audit(1769563573.727:730): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.744000 audit: BPF prog-id=240 op=UNLOAD Jan 28 01:26:13.888498 kernel: audit: type=1334 audit(1769563573.744:731): prog-id=240 op=UNLOAD Jan 28 01:26:13.744000 audit[5903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd83737eb0 a3=0 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.744000 audit: BPF prog-id=241 op=LOAD Jan 28 01:26:13.744000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd83737cf0 a2=94 a3=54428f items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.744000 audit: BPF prog-id=241 op=UNLOAD Jan 28 01:26:13.744000 audit[5903]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd83737cf0 a2=94 a3=54428f items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.744000 audit: BPF prog-id=242 op=LOAD Jan 28 01:26:13.744000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd83737d20 a2=94 a3=2 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.744000 audit: BPF prog-id=242 op=UNLOAD Jan 28 01:26:13.744000 audit[5903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd83737d20 a2=0 a3=2 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.744000 audit: BPF prog-id=243 op=LOAD Jan 28 01:26:13.744000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd83737ad0 
a2=94 a3=4 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.744000 audit: BPF prog-id=243 op=UNLOAD Jan 28 01:26:13.744000 audit[5903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd83737ad0 a2=94 a3=4 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.744000 audit: BPF prog-id=244 op=LOAD Jan 28 01:26:13.744000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd83737bd0 a2=94 a3=7ffd83737d50 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.744000 audit: BPF prog-id=244 op=UNLOAD Jan 28 01:26:13.744000 audit[5903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd83737bd0 a2=0 a3=7ffd83737d50 items=0 ppid=5243 pid=5903 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.750000 audit: BPF prog-id=245 op=LOAD Jan 28 01:26:13.750000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd83737300 a2=94 a3=2 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.750000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.750000 audit: BPF prog-id=245 op=UNLOAD Jan 28 01:26:13.750000 audit[5903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd83737300 a2=0 a3=2 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.750000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.750000 audit: BPF prog-id=246 op=LOAD Jan 28 01:26:13.750000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd83737400 a2=94 a3=30 items=0 ppid=5243 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.750000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:26:13.924000 audit: BPF prog-id=247 op=LOAD Jan 28 01:26:13.924000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff765739b0 a2=98 a3=0 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:13.924000 audit: BPF prog-id=247 op=UNLOAD Jan 28 01:26:13.924000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff76573980 a3=0 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:13.925000 audit: BPF prog-id=248 op=LOAD Jan 28 01:26:13.925000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff765737a0 a2=94 a3=54428f items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:26:13.925000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:13.925000 audit: BPF prog-id=248 op=UNLOAD Jan 28 01:26:13.925000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff765737a0 a2=94 a3=54428f items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.925000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:13.925000 audit: BPF prog-id=249 op=LOAD Jan 28 01:26:13.925000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff765737d0 a2=94 a3=2 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.925000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:13.925000 audit: BPF prog-id=249 op=UNLOAD Jan 28 01:26:13.925000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff765737d0 a2=0 a3=2 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.925000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.730000 audit: BPF prog-id=250 op=LOAD Jan 28 01:26:14.730000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff76573690 a2=94 a3=1 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.730000 audit: BPF prog-id=250 op=UNLOAD Jan 28 01:26:14.730000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff76573690 a2=94 a3=1 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.757000 audit: BPF prog-id=251 op=LOAD Jan 28 01:26:14.757000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff76573680 a2=94 a3=4 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.757000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.757000 audit: BPF prog-id=251 op=UNLOAD Jan 28 01:26:14.757000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff76573680 a2=0 a3=4 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.757000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.758000 audit: BPF prog-id=252 op=LOAD Jan 28 01:26:14.758000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff765734e0 a2=94 a3=5 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.758000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.758000 audit: BPF prog-id=252 op=UNLOAD Jan 28 01:26:14.758000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff765734e0 a2=0 a3=5 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.758000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.758000 audit: BPF prog-id=253 op=LOAD Jan 28 01:26:14.758000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff76573700 a2=94 a3=6 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.758000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.758000 audit: BPF prog-id=253 op=UNLOAD Jan 28 01:26:14.758000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff76573700 a2=0 a3=6 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.758000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.759000 audit: BPF prog-id=254 op=LOAD Jan 28 01:26:14.759000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff76572eb0 a2=94 a3=88 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.759000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.759000 audit: BPF prog-id=255 op=LOAD Jan 28 01:26:14.759000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff76572d30 a2=94 a3=2 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.759000 audit: BPF prog-id=255 op=UNLOAD Jan 28 01:26:14.759000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff76572d60 a2=0 a3=7fff76572e60 items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.760000 audit: BPF prog-id=254 op=UNLOAD Jan 28 01:26:14.760000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=38af0d10 a2=0 a3=37fb75570eb0e1b items=0 ppid=5243 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.760000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:26:14.798000 audit: BPF prog-id=246 op=UNLOAD Jan 28 01:26:14.798000 audit[5243]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000d92100 a2=0 a3=0 items=0 ppid=5234 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:14.798000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 01:26:15.147182 systemd-networkd[1503]: vxlan.calico: Gained IPv6LL Jan 28 01:26:15.478000 audit[5943]: NETFILTER_CFG table=mangle:137 family=2 entries=16 op=nft_register_chain pid=5943 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:26:15.478000 audit[5943]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff414ea6d0 a2=0 a3=7fff414ea6bc items=0 ppid=5243 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:15.478000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:26:15.504000 audit[5944]: NETFILTER_CFG table=nat:138 family=2 entries=15 op=nft_register_chain pid=5944 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:26:15.504000 audit[5944]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd7d9e0480 a2=0 a3=7ffd7d9e046c items=0 ppid=5243 pid=5944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:15.504000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:26:15.560000 audit[5942]: NETFILTER_CFG table=raw:139 family=2 entries=21 op=nft_register_chain pid=5942 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:26:15.560000 audit[5942]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffed05446e0 a2=0 a3=7ffed05446cc items=0 ppid=5243 pid=5942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:15.560000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:26:15.594000 audit[5946]: NETFILTER_CFG table=filter:140 family=2 entries=327 op=nft_register_chain pid=5946 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:26:15.594000 audit[5946]: SYSCALL arch=c000003e syscall=46 success=yes exit=193468 a0=3 a1=7fff32cc3e10 a2=0 a3=7fff32cc3dfc items=0 ppid=5243 pid=5946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:15.594000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:26:17.073259 containerd[1624]: time="2026-01-28T01:26:17.072949734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:17.316729 containerd[1624]: time="2026-01-28T01:26:17.314568822Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 28 01:26:17.329495 containerd[1624]: time="2026-01-28T01:26:17.329256771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:17.329974 containerd[1624]: time="2026-01-28T01:26:17.329916149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:17.334300 kubelet[2995]: E0128 01:26:17.334229 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:17.334300 kubelet[2995]: E0128 01:26:17.334294 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:17.352250 kubelet[2995]: E0128 01:26:17.334474 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwzkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:17.352250 kubelet[2995]: E0128 01:26:17.336614 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:26:18.604248 kubelet[2995]: E0128 01:26:18.603551 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:18.991000 audit[5956]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:19.046228 kernel: kauditd_printk_skb: 104 callbacks suppressed Jan 28 01:26:19.046397 kernel: audit: type=1325 audit(1769563578.991:766): table=filter:141 family=2 entries=14 op=nft_register_rule pid=5956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:18.991000 audit[5956]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff791cdba0 a2=0 a3=7fff791cdb8c items=0 ppid=3100 pid=5956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:19.147969 containerd[1624]: time="2026-01-28T01:26:19.134663617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:26:19.154263 kernel: audit: type=1300 audit(1769563578.991:766): arch=c000003e syscall=46 success=yes exit=5248 a0=3 
a1=7fff791cdba0 a2=0 a3=7fff791cdb8c items=0 ppid=3100 pid=5956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:18.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:19.201750 kernel: audit: type=1327 audit(1769563578.991:766): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:19.311547 containerd[1624]: time="2026-01-28T01:26:19.303674012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:19.318591 containerd[1624]: time="2026-01-28T01:26:19.318527105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:26:19.318869 containerd[1624]: time="2026-01-28T01:26:19.318735743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:19.318000 audit[5956]: NETFILTER_CFG table=nat:142 family=2 entries=56 op=nft_register_chain pid=5956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:19.385493 kubelet[2995]: E0128 01:26:19.343536 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:19.385493 kubelet[2995]: E0128 01:26:19.343606 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:19.385493 kubelet[2995]: E0128 01:26:19.343755 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:19.387489 kernel: audit: type=1325 audit(1769563579.318:767): table=nat:142 family=2 entries=56 op=nft_register_chain pid=5956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:19.318000 audit[5956]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff791cdba0 a2=0 a3=7fff791cdb8c items=0 ppid=3100 pid=5956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:19.473407 kernel: audit: type=1300 audit(1769563579.318:767): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff791cdba0 a2=0 a3=7fff791cdb8c items=0 ppid=3100 pid=5956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:19.473493 containerd[1624]: time="2026-01-28T01:26:19.402578643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:26:19.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:19.509233 kernel: audit: type=1327 audit(1769563579.318:767): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:19.682485 containerd[1624]: time="2026-01-28T01:26:19.680206152Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Jan 28 01:26:19.725449 containerd[1624]: time="2026-01-28T01:26:19.723532093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:26:19.725449 containerd[1624]: time="2026-01-28T01:26:19.723733247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:19.730330 kubelet[2995]: E0128 01:26:19.727597 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:19.730330 kubelet[2995]: E0128 01:26:19.727674 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:19.730330 kubelet[2995]: E0128 01:26:19.727897 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:19.730330 kubelet[2995]: E0128 01:26:19.729332 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:26:20.086160 containerd[1624]: time="2026-01-28T01:26:20.075561776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:26:20.228993 containerd[1624]: time="2026-01-28T01:26:20.228914056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:20.247826 containerd[1624]: time="2026-01-28T01:26:20.247592374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:26:20.247826 containerd[1624]: time="2026-01-28T01:26:20.247813716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:20.250368 kubelet[2995]: E0128 01:26:20.249926 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:20.268227 kubelet[2995]: E0128 01:26:20.250879 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:20.268227 kubelet[2995]: E0128 01:26:20.254895 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptffd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:20.269602 kubelet[2995]: E0128 01:26:20.269428 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:26:20.424268 kubelet[2995]: E0128 01:26:20.423901 
2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:21.089358 containerd[1624]: time="2026-01-28T01:26:21.084873719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:26:21.254233 containerd[1624]: time="2026-01-28T01:26:21.253506944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:21.302260 containerd[1624]: time="2026-01-28T01:26:21.298505260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:26:21.302260 containerd[1624]: time="2026-01-28T01:26:21.298632916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:21.313951 kubelet[2995]: E0128 01:26:21.311409 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:21.313951 kubelet[2995]: E0128 01:26:21.312795 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:21.347424 kubelet[2995]: E0128 01:26:21.325811 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfmt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:21.360423 kubelet[2995]: E0128 01:26:21.360337 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:26:22.079604 containerd[1624]: time="2026-01-28T01:26:22.078457855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:22.194924 containerd[1624]: time="2026-01-28T01:26:22.194776216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
01:26:22.201669 containerd[1624]: time="2026-01-28T01:26:22.201611747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:22.201895 containerd[1624]: time="2026-01-28T01:26:22.201676758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:22.202649 kubelet[2995]: E0128 01:26:22.202353 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:22.202649 kubelet[2995]: E0128 01:26:22.202482 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:22.202782 kubelet[2995]: E0128 01:26:22.202660 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prlsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:22.205293 kubelet[2995]: E0128 01:26:22.204748 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:26:23.098949 containerd[1624]: time="2026-01-28T01:26:23.098554551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:26:23.233260 containerd[1624]: time="2026-01-28T01:26:23.232729194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:23.265633 containerd[1624]: time="2026-01-28T01:26:23.265299805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:26:23.265633 containerd[1624]: time="2026-01-28T01:26:23.265496911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:23.272226 kubelet[2995]: E0128 01:26:23.266539 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:23.272226 kubelet[2995]: E0128 01:26:23.266595 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:23.272226 kubelet[2995]: E0128 01:26:23.266726 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4880c63e305842ec869c2f1042d40e46,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:23.273229 containerd[1624]: time="2026-01-28T01:26:23.271971320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:26:23.396279 containerd[1624]: time="2026-01-28T01:26:23.395243625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:23.405643 containerd[1624]: time="2026-01-28T01:26:23.404581225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:26:23.405643 containerd[1624]: time="2026-01-28T01:26:23.404721807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:23.406926 kubelet[2995]: E0128 01:26:23.406802 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:23.407211 kubelet[2995]: E0128 01:26:23.406929 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:23.409707 kubelet[2995]: E0128 01:26:23.409571 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:23.417370 kubelet[2995]: E0128 01:26:23.410737 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:26:32.071171 kubelet[2995]: E0128 01:26:32.070556 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:26:32.073406 kubelet[2995]: E0128 01:26:32.073376 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:33.109395 kubelet[2995]: E0128 01:26:33.109331 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:26:33.123244 kubelet[2995]: E0128 01:26:33.120557 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:26:36.074244 kubelet[2995]: E0128 01:26:36.070828 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:26:37.088426 kubelet[2995]: E0128 01:26:37.080923 2995 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:26:38.078859 kubelet[2995]: E0128 01:26:38.077903 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:26:40.065369 kubelet[2995]: E0128 01:26:40.059940 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:43.119320 containerd[1624]: time="2026-01-28T01:26:43.117771798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:26:43.257422 containerd[1624]: time="2026-01-28T01:26:43.254724548Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:43.285317 containerd[1624]: 
time="2026-01-28T01:26:43.277387426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:43.285317 containerd[1624]: time="2026-01-28T01:26:43.277994127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:26:43.296718 kubelet[2995]: E0128 01:26:43.290966 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:43.297940 kubelet[2995]: E0128 01:26:43.297556 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:43.297940 kubelet[2995]: E0128 01:26:43.297874 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:26:43.375641 containerd[1624]: time="2026-01-28T01:26:43.375487536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:26:43.541588 containerd[1624]: time="2026-01-28T01:26:43.534339522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:43.549752 containerd[1624]: time="2026-01-28T01:26:43.548766963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:26:43.549752 containerd[1624]: time="2026-01-28T01:26:43.548898257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:43.557293 kubelet[2995]: E0128 01:26:43.555839 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:43.557293 kubelet[2995]: E0128 01:26:43.555914 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:43.557293 kubelet[2995]: E0128 01:26:43.556265 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:43.559890 kubelet[2995]: E0128 01:26:43.559428 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:26:45.074908 containerd[1624]: time="2026-01-28T01:26:45.074380962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:26:45.183777 containerd[1624]: time="2026-01-28T01:26:45.181925586Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:45.203669 containerd[1624]: time="2026-01-28T01:26:45.188969875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:26:45.203669 containerd[1624]: time="2026-01-28T01:26:45.194170343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:45.204635 kubelet[2995]: E0128 01:26:45.203673 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:45.204635 kubelet[2995]: E0128 01:26:45.203724 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:45.208339 kubelet[2995]: E0128 01:26:45.205432 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptffd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:45.208579 containerd[1624]: time="2026-01-28T01:26:45.205875734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:45.230982 kubelet[2995]: E0128 01:26:45.211541 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:26:45.336584 containerd[1624]: time="2026-01-28T01:26:45.333955591Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:45.352980 containerd[1624]: time="2026-01-28T01:26:45.352600516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:45.352980 containerd[1624]: time="2026-01-28T01:26:45.352799777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:45.355683 kubelet[2995]: E0128 01:26:45.354294 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:45.355683 kubelet[2995]: E0128 01:26:45.354374 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:45.355683 kubelet[2995]: E0128 01:26:45.354539 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwzkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:45.362369 kubelet[2995]: E0128 01:26:45.360359 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:26:52.663766 containerd[1624]: time="2026-01-28T01:26:52.663693357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:52.924282 kubelet[2995]: E0128 01:26:52.920941 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.764s" Jan 28 01:26:53.155955 containerd[1624]: time="2026-01-28T01:26:53.153668697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:53.178697 containerd[1624]: time="2026-01-28T01:26:53.178454482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:53.183433 containerd[1624]: time="2026-01-28T01:26:53.178870046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:53.269534 kubelet[2995]: E0128 01:26:53.266585 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:53.269534 kubelet[2995]: E0128 01:26:53.268741 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:53.304493 kubelet[2995]: E0128 01:26:53.304416 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prlsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:53.318510 containerd[1624]: time="2026-01-28T01:26:53.316613434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:26:53.323280 kubelet[2995]: E0128 01:26:53.319723 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:26:53.337793 kubelet[2995]: E0128 01:26:53.321497 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:53.364634 kubelet[2995]: E0128 01:26:53.364599 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:53.809423 containerd[1624]: time="2026-01-28T01:26:53.805553924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:53.826460 containerd[1624]: time="2026-01-28T01:26:53.826397035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:26:53.830304 containerd[1624]: time="2026-01-28T01:26:53.826949918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:53.830389 kubelet[2995]: E0128 01:26:53.826798 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:53.830389 kubelet[2995]: E0128 01:26:53.826861 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:53.839532 kubelet[2995]: E0128 01:26:53.837437 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4880c63e305842ec869c2f1042d40e46,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:53.879591 containerd[1624]: time="2026-01-28T01:26:53.873494023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:26:54.305384 containerd[1624]: 
time="2026-01-28T01:26:54.277342364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:54.305384 containerd[1624]: time="2026-01-28T01:26:54.290643557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:26:54.305384 containerd[1624]: time="2026-01-28T01:26:54.291239026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:54.309237 kubelet[2995]: E0128 01:26:54.306814 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:54.309855 kubelet[2995]: E0128 01:26:54.309816 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:54.318612 kubelet[2995]: E0128 01:26:54.310802 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfmt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:54.332238 kubelet[2995]: E0128 01:26:54.320679 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:26:54.342488 containerd[1624]: time="2026-01-28T01:26:54.332650144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:26:54.515237 containerd[1624]: time="2026-01-28T01:26:54.512546516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
Jan 28 01:26:54.533325 containerd[1624]: time="2026-01-28T01:26:54.532431832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:26:54.533325 containerd[1624]: time="2026-01-28T01:26:54.532554109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:54.536426 kubelet[2995]: E0128 01:26:54.534749 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:54.536426 kubelet[2995]: E0128 01:26:54.534809 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:54.544144 kubelet[2995]: E0128 01:26:54.534967 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:54.552285 kubelet[2995]: E0128 01:26:54.550923 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:26:56.075957 kubelet[2995]: E0128 01:26:56.072904 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:56.086483 kubelet[2995]: E0128 01:26:56.082598 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:26:56.091189 kubelet[2995]: E0128 01:26:56.090894 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:26:56.133576 kubelet[2995]: E0128 01:26:56.133459 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:27:05.102981 kubelet[2995]: E0128 01:27:05.100897 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:27:07.096965 kubelet[2995]: E0128 01:27:07.095939 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:27:08.075338 kubelet[2995]: E0128 01:27:08.074730 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:27:08.078341 kubelet[2995]: E0128 01:27:08.077394 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:27:08.078341 kubelet[2995]: 
E0128 01:27:08.077513 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:27:10.160588 kubelet[2995]: E0128 01:27:10.156702 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:27:11.893967 containerd[1624]: time="2026-01-28T01:27:11.890975631Z" level=info msg="container event discarded" container=ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8 type=CONTAINER_CREATED_EVENT Jan 28 01:27:11.920947 containerd[1624]: time="2026-01-28T01:27:11.919440047Z" level=info msg="container event discarded" container=ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8 type=CONTAINER_STARTED_EVENT Jan 28 01:27:12.080303 containerd[1624]: 
time="2026-01-28T01:27:12.078387221Z" level=info msg="container event discarded" container=2debcf68800065793a848da76ffca59e49f3331b44a73e75487c6f1bf2c02ec5 type=CONTAINER_CREATED_EVENT Jan 28 01:27:12.080303 containerd[1624]: time="2026-01-28T01:27:12.078447474Z" level=info msg="container event discarded" container=2debcf68800065793a848da76ffca59e49f3331b44a73e75487c6f1bf2c02ec5 type=CONTAINER_STARTED_EVENT Jan 28 01:27:12.333826 containerd[1624]: time="2026-01-28T01:27:12.333271198Z" level=info msg="container event discarded" container=db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2 type=CONTAINER_CREATED_EVENT Jan 28 01:27:12.357308 containerd[1624]: time="2026-01-28T01:27:12.356453280Z" level=info msg="container event discarded" container=03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f type=CONTAINER_CREATED_EVENT Jan 28 01:27:12.357817 containerd[1624]: time="2026-01-28T01:27:12.357514127Z" level=info msg="container event discarded" container=03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f type=CONTAINER_STARTED_EVENT Jan 28 01:27:12.467914 containerd[1624]: time="2026-01-28T01:27:12.464739866Z" level=info msg="container event discarded" container=95a3159570827a077ec2ade362d11837d7594c5a8087c9822fbee29b4ac0835f type=CONTAINER_CREATED_EVENT Jan 28 01:27:12.629370 containerd[1624]: time="2026-01-28T01:27:12.629274528Z" level=info msg="container event discarded" container=66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212 type=CONTAINER_CREATED_EVENT Jan 28 01:27:13.925916 containerd[1624]: time="2026-01-28T01:27:13.921233633Z" level=info msg="container event discarded" container=db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2 type=CONTAINER_STARTED_EVENT Jan 28 01:27:14.487494 containerd[1624]: time="2026-01-28T01:27:14.486924671Z" level=info msg="container event discarded" container=95a3159570827a077ec2ade362d11837d7594c5a8087c9822fbee29b4ac0835f type=CONTAINER_STARTED_EVENT Jan 28 
01:27:14.490155 containerd[1624]: time="2026-01-28T01:27:14.490101228Z" level=info msg="container event discarded" container=66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212 type=CONTAINER_STARTED_EVENT Jan 28 01:27:18.075685 kubelet[2995]: E0128 01:27:18.066888 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:27:20.069579 kubelet[2995]: E0128 01:27:20.065450 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:27:21.102493 kubelet[2995]: E0128 01:27:21.102353 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" 
podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:27:21.130845 kubelet[2995]: E0128 01:27:21.130785 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:27:23.096307 kubelet[2995]: E0128 01:27:23.095959 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:27:23.116943 kubelet[2995]: E0128 01:27:23.111581 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:27:31.059742 kubelet[2995]: E0128 01:27:31.059435 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:32.075164 kubelet[2995]: E0128 01:27:32.062788 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:27:33.072583 kubelet[2995]: E0128 01:27:33.072531 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:27:34.069207 kubelet[2995]: 
E0128 01:27:34.064718 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:34.078976 containerd[1624]: time="2026-01-28T01:27:34.076320434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:27:34.183668 containerd[1624]: time="2026-01-28T01:27:34.180622140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:27:34.187218 containerd[1624]: time="2026-01-28T01:27:34.185205727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:27:34.187218 containerd[1624]: time="2026-01-28T01:27:34.185363481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:27:34.196338 kubelet[2995]: E0128 01:27:34.195152 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:27:34.196338 kubelet[2995]: E0128 01:27:34.195251 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:27:34.196338 kubelet[2995]: E0128 01:27:34.195434 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:27:34.200749 containerd[1624]: time="2026-01-28T01:27:34.199559229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:27:34.286450 containerd[1624]: time="2026-01-28T01:27:34.286182110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:27:34.315362 containerd[1624]: time="2026-01-28T01:27:34.315244483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:27:34.315545 containerd[1624]: time="2026-01-28T01:27:34.315424658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:27:34.318617 kubelet[2995]: E0128 01:27:34.316152 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:27:34.318617 kubelet[2995]: E0128 01:27:34.316263 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:27:34.318617 kubelet[2995]: E0128 01:27:34.316440 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:34.332164 kubelet[2995]: E0128 01:27:34.326652 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:27:35.087286 containerd[1624]: time="2026-01-28T01:27:35.077914367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:35.223135 containerd[1624]: time="2026-01-28T01:27:35.222882732Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:27:35.231807 containerd[1624]: time="2026-01-28T01:27:35.230350766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:35.231807 containerd[1624]: time="2026-01-28T01:27:35.230406187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:27:35.232222 kubelet[2995]: E0128 01:27:35.230791 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:35.232222 kubelet[2995]: E0128 01:27:35.230856 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:35.238698 kubelet[2995]: E0128 01:27:35.231871 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwzkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:35.242947 kubelet[2995]: E0128 01:27:35.242609 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:27:37.064082 containerd[1624]: time="2026-01-28T01:27:37.063850787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:27:37.153261 containerd[1624]: time="2026-01-28T01:27:37.151210301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
01:27:37.160748 containerd[1624]: time="2026-01-28T01:27:37.160617798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:27:37.161265 containerd[1624]: time="2026-01-28T01:27:37.160890350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:27:37.161371 kubelet[2995]: E0128 01:27:37.160991 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:37.161371 kubelet[2995]: E0128 01:27:37.161287 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:37.169369 kubelet[2995]: E0128 01:27:37.161518 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4880c63e305842ec869c2f1042d40e46,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:37.169630 containerd[1624]: time="2026-01-28T01:27:37.164981309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:27:37.245842 containerd[1624]: 
time="2026-01-28T01:27:37.245424951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:27:37.258196 containerd[1624]: time="2026-01-28T01:27:37.257829103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:27:37.262440 containerd[1624]: time="2026-01-28T01:27:37.260741439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:27:37.262579 kubelet[2995]: E0128 01:27:37.261724 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:37.262579 kubelet[2995]: E0128 01:27:37.261796 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:37.262579 kubelet[2995]: E0128 01:27:37.261987 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:37.267370 kubelet[2995]: E0128 01:27:37.266267 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:27:38.081881 containerd[1624]: time="2026-01-28T01:27:38.081614221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:27:38.188179 containerd[1624]: time="2026-01-28T01:27:38.187900457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:27:38.201814 containerd[1624]: time="2026-01-28T01:27:38.201670815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:27:38.201814 containerd[1624]: time="2026-01-28T01:27:38.201756025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:27:38.203919 kubelet[2995]: E0128 01:27:38.203340 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:38.203919 kubelet[2995]: E0128 01:27:38.203413 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:38.209585 kubelet[2995]: E0128 01:27:38.209359 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptffd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:38.223148 kubelet[2995]: E0128 01:27:38.219609 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 
28 01:27:40.067470 kubelet[2995]: E0128 01:27:40.066668 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:44.064127 containerd[1624]: time="2026-01-28T01:27:44.063410216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:44.192269 containerd[1624]: time="2026-01-28T01:27:44.190621862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:27:44.196449 containerd[1624]: time="2026-01-28T01:27:44.196274901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:44.196449 containerd[1624]: time="2026-01-28T01:27:44.196398892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:27:44.197619 kubelet[2995]: E0128 01:27:44.197567 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:44.203323 kubelet[2995]: E0128 01:27:44.198358 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:44.203323 kubelet[2995]: E0128 01:27:44.199939 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prlsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:44.203323 kubelet[2995]: E0128 01:27:44.203148 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:27:44.203907 containerd[1624]: time="2026-01-28T01:27:44.198949063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:27:44.306247 containerd[1624]: time="2026-01-28T01:27:44.304455320Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
01:27:44.313887 containerd[1624]: time="2026-01-28T01:27:44.311811372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:27:44.315163 containerd[1624]: time="2026-01-28T01:27:44.314371180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:27:44.316525 kubelet[2995]: E0128 01:27:44.316424 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:27:44.316633 kubelet[2995]: E0128 01:27:44.316542 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:27:44.317391 kubelet[2995]: E0128 01:27:44.316782 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfmt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:44.318492 kubelet[2995]: E0128 01:27:44.318355 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:27:45.106271 kubelet[2995]: E0128 01:27:45.104966 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:27:50.085294 kubelet[2995]: E0128 01:27:50.083642 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:27:51.073662 kubelet[2995]: E0128 01:27:51.073416 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:27:52.061629 kubelet[2995]: E0128 01:27:52.060355 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:52.090256 kubelet[2995]: E0128 01:27:52.085340 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:27:54.088850 kubelet[2995]: E0128 01:27:54.087647 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:56.069727 kubelet[2995]: E0128 01:27:56.068885 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:27:56.894417 containerd[1624]: time="2026-01-28T01:27:56.894295951Z" level=info msg="container event discarded" container=d907afcdb91474de53729cd16de4ba304f11134f16961dcff87235338443d918 
type=CONTAINER_CREATED_EVENT Jan 28 01:27:56.894417 containerd[1624]: time="2026-01-28T01:27:56.894362906Z" level=info msg="container event discarded" container=d907afcdb91474de53729cd16de4ba304f11134f16961dcff87235338443d918 type=CONTAINER_STARTED_EVENT Jan 28 01:27:57.507651 containerd[1624]: time="2026-01-28T01:27:57.507255270Z" level=info msg="container event discarded" container=2f3fe30b5a7a11877b018c9352e1dbe04e1446bcdf93f4329258310a6d900f22 type=CONTAINER_CREATED_EVENT Jan 28 01:27:58.789497 containerd[1624]: time="2026-01-28T01:27:58.788177373Z" level=info msg="container event discarded" container=2f3fe30b5a7a11877b018c9352e1dbe04e1446bcdf93f4329258310a6d900f22 type=CONTAINER_STARTED_EVENT Jan 28 01:27:59.079695 kubelet[2995]: E0128 01:27:59.079357 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:28:00.126893 kubelet[2995]: E0128 01:28:00.126318 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:28:00.834498 containerd[1624]: time="2026-01-28T01:28:00.833172290Z" level=info msg="container event discarded" container=12249ecd079ba8e7facdd83abe00a37c064b0f02369864baa816dd930c523b39 type=CONTAINER_CREATED_EVENT Jan 28 01:28:00.834498 containerd[1624]: time="2026-01-28T01:28:00.833270212Z" level=info msg="container event discarded" container=12249ecd079ba8e7facdd83abe00a37c064b0f02369864baa816dd930c523b39 type=CONTAINER_STARTED_EVENT Jan 28 01:28:02.071474 kubelet[2995]: E0128 01:28:02.070598 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:28:02.085211 kubelet[2995]: E0128 01:28:02.076320 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:28:03.085984 kubelet[2995]: E0128 01:28:03.085408 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:03.105439 kubelet[2995]: E0128 01:28:03.098538 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:28:07.062143 kubelet[2995]: E0128 01:28:07.061453 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:28:12.155377 kubelet[2995]: E0128 01:28:12.155283 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:28:12.158967 kubelet[2995]: E0128 01:28:12.158772 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:28:13.584947 containerd[1624]: time="2026-01-28T01:28:13.584877453Z" level=info msg="container event discarded" container=516a0f43159d67d16261409a30db2c8da990f4e4a5b172a2f39491a046095d15 type=CONTAINER_CREATED_EVENT Jan 28 01:28:14.130373 kubelet[2995]: E0128 01:28:14.125880 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:14.469861 containerd[1624]: time="2026-01-28T01:28:14.456744149Z" level=info msg="container event discarded" container=516a0f43159d67d16261409a30db2c8da990f4e4a5b172a2f39491a046095d15 type=CONTAINER_STARTED_EVENT Jan 28 01:28:16.984895 kubelet[2995]: E0128 01:28:16.982927 2995 kubelet.go:2627] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.874s" Jan 28 01:28:16.995109 kubelet[2995]: E0128 01:28:16.991979 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:16.995477 kubelet[2995]: E0128 01:28:16.993843 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:28:16.995477 kubelet[2995]: E0128 01:28:16.993936 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:28:17.881468 kubelet[2995]: E0128 01:28:17.880817 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:28:33.357933 systemd[1]: cri-containerd-66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212.scope: Deactivated successfully. Jan 28 01:28:33.413709 kernel: audit: type=1334 audit(1769563713.394:768): prog-id=256 op=LOAD Jan 28 01:28:33.414244 kernel: audit: type=1334 audit(1769563713.395:769): prog-id=108 op=UNLOAD Jan 28 01:28:33.394000 audit: BPF prog-id=256 op=LOAD Jan 28 01:28:33.395000 audit: BPF prog-id=108 op=UNLOAD Jan 28 01:28:33.387500 systemd[1]: cri-containerd-66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212.scope: Consumed 19.425s CPU time, 70.8M memory peak, 9.5M read from disk. 
Jan 28 01:28:33.415396 kubelet[2995]: E0128 01:28:33.384682 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.272s" Jan 28 01:28:33.432482 kernel: audit: type=1334 audit(1769563713.395:770): prog-id=112 op=UNLOAD Jan 28 01:28:33.395000 audit: BPF prog-id=112 op=UNLOAD Jan 28 01:28:33.443454 kernel: audit: type=1334 audit(1769563713.396:771): prog-id=88 op=UNLOAD Jan 28 01:28:33.396000 audit: BPF prog-id=88 op=UNLOAD Jan 28 01:28:33.454215 kubelet[2995]: E0128 01:28:33.453309 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:28:33.463266 systemd[1]: cri-containerd-db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2.scope: Deactivated successfully. Jan 28 01:28:33.468250 systemd[1]: cri-containerd-db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2.scope: Consumed 13.492s CPU time, 34.1M memory peak, 9.3M read from disk. 
Jan 28 01:28:33.473000 audit: BPF prog-id=98 op=UNLOAD Jan 28 01:28:33.494169 kernel: audit: type=1334 audit(1769563713.473:772): prog-id=98 op=UNLOAD Jan 28 01:28:33.473000 audit: BPF prog-id=102 op=UNLOAD Jan 28 01:28:33.504862 kernel: audit: type=1334 audit(1769563713.473:773): prog-id=102 op=UNLOAD Jan 28 01:28:33.474000 audit: BPF prog-id=257 op=LOAD Jan 28 01:28:33.513833 containerd[1624]: time="2026-01-28T01:28:33.508806213Z" level=info msg="received container exit event container_id:\"db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2\" id:\"db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2\" pid:2815 exit_status:1 exited_at:{seconds:1769563713 nanos:473935039}" Jan 28 01:28:33.513833 containerd[1624]: time="2026-01-28T01:28:33.510528466Z" level=info msg="received container exit event container_id:\"66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212\" id:\"66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212\" pid:2842 exit_status:1 exited_at:{seconds:1769563713 nanos:448410647}" Jan 28 01:28:33.518899 kernel: audit: type=1334 audit(1769563713.474:774): prog-id=257 op=LOAD Jan 28 01:28:33.479000 audit: BPF prog-id=83 op=UNLOAD Jan 28 01:28:33.527189 kernel: audit: type=1334 audit(1769563713.479:775): prog-id=83 op=UNLOAD Jan 28 01:28:33.527264 kubelet[2995]: E0128 01:28:33.520141 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:28:33.527264 kubelet[2995]: E0128 01:28:33.520349 2995 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:28:33.527264 kubelet[2995]: E0128 01:28:33.520458 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:28:33.598777 kubelet[2995]: E0128 01:28:33.598424 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" 
podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:28:33.652736 kubelet[2995]: E0128 01:28:33.648424 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:28:34.595261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2-rootfs.mount: Deactivated successfully. Jan 28 01:28:34.685920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212-rootfs.mount: Deactivated successfully. 
Jan 28 01:28:35.015125 kubelet[2995]: E0128 01:28:35.008332 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:35.685778 kubelet[2995]: I0128 01:28:35.684733 2995 scope.go:117] "RemoveContainer" containerID="66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212" Jan 28 01:28:35.685778 kubelet[2995]: E0128 01:28:35.684909 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:35.723249 containerd[1624]: time="2026-01-28T01:28:35.720787123Z" level=info msg="CreateContainer within sandbox \"03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 28 01:28:35.779865 kubelet[2995]: I0128 01:28:35.767948 2995 scope.go:117] "RemoveContainer" containerID="db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2" Jan 28 01:28:35.784793 kubelet[2995]: E0128 01:28:35.784492 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:35.864847 containerd[1624]: time="2026-01-28T01:28:35.864699536Z" level=info msg="CreateContainer within sandbox \"ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 28 01:28:35.933968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount322932996.mount: Deactivated successfully. 
Jan 28 01:28:35.987699 containerd[1624]: time="2026-01-28T01:28:35.986323155Z" level=info msg="Container c06cf8ae82bdcf7cdece30a276a0a70d921c82f263ccd3a406f48b98e22886cf: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:28:36.000751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450933204.mount: Deactivated successfully. Jan 28 01:28:36.032573 containerd[1624]: time="2026-01-28T01:28:36.029964289Z" level=info msg="Container f91623b34d6b1235bb31655ab84d6d346cdbdbc89c295e96017342940666b977: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:28:36.070544 containerd[1624]: time="2026-01-28T01:28:36.070401734Z" level=info msg="CreateContainer within sandbox \"03470fbad2c7fceb2309185af06fa064e58fbc8274d3884b4127c74e9dd9889f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c06cf8ae82bdcf7cdece30a276a0a70d921c82f263ccd3a406f48b98e22886cf\"" Jan 28 01:28:36.080152 containerd[1624]: time="2026-01-28T01:28:36.077382706Z" level=info msg="StartContainer for \"c06cf8ae82bdcf7cdece30a276a0a70d921c82f263ccd3a406f48b98e22886cf\"" Jan 28 01:28:36.080152 containerd[1624]: time="2026-01-28T01:28:36.079199541Z" level=info msg="connecting to shim c06cf8ae82bdcf7cdece30a276a0a70d921c82f263ccd3a406f48b98e22886cf" address="unix:///run/containerd/s/c38d794bdb9b2efe1074e8d69e0448e6a704ea4368b55926dec67c4143092ac4" protocol=ttrpc version=3 Jan 28 01:28:36.112198 containerd[1624]: time="2026-01-28T01:28:36.111883764Z" level=info msg="CreateContainer within sandbox \"ff07cf07eb31cf63ce75395e89aca1bb795d16d252adbc8aa23df91c7f9cf1c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f91623b34d6b1235bb31655ab84d6d346cdbdbc89c295e96017342940666b977\"" Jan 28 01:28:36.138272 containerd[1624]: time="2026-01-28T01:28:36.137702483Z" level=info msg="StartContainer for \"f91623b34d6b1235bb31655ab84d6d346cdbdbc89c295e96017342940666b977\"" Jan 28 01:28:36.161624 containerd[1624]: time="2026-01-28T01:28:36.161517742Z" 
level=info msg="connecting to shim f91623b34d6b1235bb31655ab84d6d346cdbdbc89c295e96017342940666b977" address="unix:///run/containerd/s/1ba2154d6325ff07a5f2e4e2b293ec2caf8e0ded59a4285b06f87193516c717f" protocol=ttrpc version=3 Jan 28 01:28:36.261465 systemd[1]: Started cri-containerd-c06cf8ae82bdcf7cdece30a276a0a70d921c82f263ccd3a406f48b98e22886cf.scope - libcontainer container c06cf8ae82bdcf7cdece30a276a0a70d921c82f263ccd3a406f48b98e22886cf. Jan 28 01:28:36.407747 systemd[1]: Started cri-containerd-f91623b34d6b1235bb31655ab84d6d346cdbdbc89c295e96017342940666b977.scope - libcontainer container f91623b34d6b1235bb31655ab84d6d346cdbdbc89c295e96017342940666b977. Jan 28 01:28:36.431000 audit: BPF prog-id=258 op=LOAD Jan 28 01:28:36.439000 audit: BPF prog-id=259 op=LOAD Jan 28 01:28:36.480190 kernel: audit: type=1334 audit(1769563716.431:776): prog-id=258 op=LOAD Jan 28 01:28:36.480337 kernel: audit: type=1334 audit(1769563716.439:777): prog-id=259 op=LOAD Jan 28 01:28:36.439000 audit[6206]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2673 pid=6206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330366366386165383262646366376364656365333061323736613061 Jan 28 01:28:36.439000 audit: BPF prog-id=259 op=UNLOAD Jan 28 01:28:36.439000 audit[6206]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2673 pid=6206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.439000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330366366386165383262646366376364656365333061323736613061 Jan 28 01:28:36.440000 audit: BPF prog-id=260 op=LOAD Jan 28 01:28:36.440000 audit[6206]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2673 pid=6206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330366366386165383262646366376364656365333061323736613061 Jan 28 01:28:36.440000 audit: BPF prog-id=261 op=LOAD Jan 28 01:28:36.440000 audit[6206]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2673 pid=6206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330366366386165383262646366376364656365333061323736613061 Jan 28 01:28:36.440000 audit: BPF prog-id=261 op=UNLOAD Jan 28 01:28:36.440000 audit[6206]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2673 pid=6206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330366366386165383262646366376364656365333061323736613061 Jan 28 01:28:36.440000 audit: BPF prog-id=260 op=UNLOAD Jan 28 01:28:36.440000 audit[6206]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2673 pid=6206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330366366386165383262646366376364656365333061323736613061 Jan 28 01:28:36.440000 audit: BPF prog-id=262 op=LOAD Jan 28 01:28:36.440000 audit[6206]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2673 pid=6206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330366366386165383262646366376364656365333061323736613061 Jan 28 01:28:36.536000 audit: BPF prog-id=263 op=LOAD Jan 28 01:28:36.537000 audit: BPF prog-id=264 op=LOAD Jan 28 01:28:36.537000 audit[6218]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2677 pid=6218 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313632336233346436623132333562623331363535616238346436 Jan 28 01:28:36.537000 audit: BPF prog-id=264 op=UNLOAD Jan 28 01:28:36.537000 audit[6218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=6218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313632336233346436623132333562623331363535616238346436 Jan 28 01:28:36.538000 audit: BPF prog-id=265 op=LOAD Jan 28 01:28:36.538000 audit[6218]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2677 pid=6218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313632336233346436623132333562623331363535616238346436 Jan 28 01:28:36.538000 audit: BPF prog-id=266 op=LOAD Jan 28 01:28:36.538000 audit[6218]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 
ppid=2677 pid=6218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313632336233346436623132333562623331363535616238346436 Jan 28 01:28:36.538000 audit: BPF prog-id=266 op=UNLOAD Jan 28 01:28:36.538000 audit[6218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=6218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313632336233346436623132333562623331363535616238346436 Jan 28 01:28:36.556000 audit: BPF prog-id=265 op=UNLOAD Jan 28 01:28:36.556000 audit[6218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=6218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313632336233346436623132333562623331363535616238346436 Jan 28 01:28:36.559000 audit: BPF prog-id=267 op=LOAD Jan 28 01:28:36.559000 audit[6218]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001306e8 a2=98 a3=0 items=0 ppid=2677 pid=6218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:28:36.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313632336233346436623132333562623331363535616238346436 Jan 28 01:28:37.082757 containerd[1624]: time="2026-01-28T01:28:37.081757803Z" level=info msg="StartContainer for \"c06cf8ae82bdcf7cdece30a276a0a70d921c82f263ccd3a406f48b98e22886cf\" returns successfully" Jan 28 01:28:37.159961 containerd[1624]: time="2026-01-28T01:28:37.159915280Z" level=info msg="StartContainer for \"f91623b34d6b1235bb31655ab84d6d346cdbdbc89c295e96017342940666b977\" returns successfully" Jan 28 01:28:37.981188 kubelet[2995]: E0128 01:28:37.979887 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:38.005153 kubelet[2995]: E0128 01:28:38.003694 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:39.012178 kubelet[2995]: E0128 01:28:39.010772 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:41.830100 kubelet[2995]: E0128 01:28:41.828725 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:44.060141 kubelet[2995]: E0128 01:28:44.059815 2995 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:45.106227 kubelet[2995]: E0128 01:28:45.104804 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:28:45.110754 kubelet[2995]: E0128 01:28:45.110655 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:28:46.068666 kubelet[2995]: E0128 01:28:46.061876 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:28:46.068666 kubelet[2995]: E0128 01:28:46.068483 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:28:47.401232 kubelet[2995]: E0128 01:28:47.392129 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:48.071172 kubelet[2995]: E0128 01:28:48.067743 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" 
podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:28:48.071973 kubelet[2995]: E0128 01:28:48.071923 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:28:48.192124 kubelet[2995]: E0128 01:28:48.191154 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:49.314293 kubelet[2995]: E0128 01:28:49.312868 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:50.948252 containerd[1624]: time="2026-01-28T01:28:50.942332287Z" level=info msg="container event discarded" container=b1e294b9cab2dfd134181e85da8c01ccfb64eed1b566222f7cb5f26a846ff1f6 type=CONTAINER_CREATED_EVENT Jan 28 01:28:50.948252 containerd[1624]: time="2026-01-28T01:28:50.942703158Z" level=info msg="container event discarded" container=b1e294b9cab2dfd134181e85da8c01ccfb64eed1b566222f7cb5f26a846ff1f6 type=CONTAINER_STARTED_EVENT Jan 28 01:28:50.948252 containerd[1624]: time="2026-01-28T01:28:50.942722665Z" level=info msg="container event discarded" container=f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374 type=CONTAINER_CREATED_EVENT Jan 28 01:28:50.948252 containerd[1624]: time="2026-01-28T01:28:50.942735178Z" level=info msg="container event discarded" container=f51c35e06fd03b0092d6c6d4c751a427c78b1d590f68d3abde526efe0dfc1374 
type=CONTAINER_STARTED_EVENT Jan 28 01:28:51.875296 kubelet[2995]: E0128 01:28:51.873937 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:54.068601 kubelet[2995]: E0128 01:28:54.067900 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:55.066498 kubelet[2995]: E0128 01:28:55.066459 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:58.063333 kubelet[2995]: E0128 01:28:58.062613 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:28:59.070596 kubelet[2995]: E0128 01:28:59.068202 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:59.622942 containerd[1624]: time="2026-01-28T01:28:59.621962295Z" level=info msg="container event discarded" container=6b072554be0c2642ddac2b83d7e5d4ab98bb4d2ea073283197f1b314d79b2389 type=CONTAINER_CREATED_EVENT Jan 28 01:29:00.082884 containerd[1624]: time="2026-01-28T01:29:00.081641562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:29:00.229643 containerd[1624]: 
time="2026-01-28T01:29:00.229406362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:29:00.243529 containerd[1624]: time="2026-01-28T01:29:00.242694209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:29:00.243529 containerd[1624]: time="2026-01-28T01:29:00.242816376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:29:00.253592 kubelet[2995]: E0128 01:29:00.247932 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:29:00.269175 kubelet[2995]: E0128 01:29:00.268223 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:29:00.269515 kubelet[2995]: E0128 01:29:00.269384 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:29:00.270790 containerd[1624]: time="2026-01-28T01:29:00.270679468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:29:00.348633 containerd[1624]: time="2026-01-28T01:29:00.348249138Z" level=info msg="container event discarded" container=6b072554be0c2642ddac2b83d7e5d4ab98bb4d2ea073283197f1b314d79b2389 type=CONTAINER_STARTED_EVENT Jan 28 01:29:00.375434 containerd[1624]: time="2026-01-28T01:29:00.374678195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:29:00.394308 containerd[1624]: time="2026-01-28T01:29:00.392164608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:29:00.394308 containerd[1624]: time="2026-01-28T01:29:00.392422608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:29:00.394522 kubelet[2995]: E0128 01:29:00.392909 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:29:00.394522 kubelet[2995]: E0128 01:29:00.393137 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:29:00.394522 kubelet[2995]: E0128 01:29:00.393411 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4880c63e305842ec869c2f1042d40e46,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:29:00.399258 containerd[1624]: time="2026-01-28T01:29:00.397515615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:29:00.523200 containerd[1624]: 
time="2026-01-28T01:29:00.519175919Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:29:00.545172 containerd[1624]: time="2026-01-28T01:29:00.543504194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:29:00.545172 containerd[1624]: time="2026-01-28T01:29:00.543655355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:29:00.545381 kubelet[2995]: E0128 01:29:00.543799 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:29:00.545381 kubelet[2995]: E0128 01:29:00.543860 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:29:00.545381 kubelet[2995]: E0128 01:29:00.544253 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:29:00.547406 kubelet[2995]: E0128 01:29:00.546362 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:29:00.551837 containerd[1624]: time="2026-01-28T01:29:00.548919109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:29:00.672529 containerd[1624]: time="2026-01-28T01:29:00.672332221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:29:00.689236 containerd[1624]: time="2026-01-28T01:29:00.684613426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:29:00.689236 containerd[1624]: time="2026-01-28T01:29:00.684746283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:29:00.689408 kubelet[2995]: E0128 01:29:00.686709 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:29:00.689408 kubelet[2995]: E0128 01:29:00.686767 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:29:00.689408 kubelet[2995]: E0128 01:29:00.686908 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNon
Root:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:29:00.703283 kubelet[2995]: E0128 01:29:00.698835 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:29:00.889330 containerd[1624]: time="2026-01-28T01:29:00.888453921Z" level=info msg="container event discarded" container=cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874 type=CONTAINER_CREATED_EVENT Jan 28 01:29:01.095607 kubelet[2995]: E0128 01:29:01.082571 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:29:01.100158 containerd[1624]: time="2026-01-28T01:29:01.082961110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:29:01.198504 containerd[1624]: time="2026-01-28T01:29:01.198445059Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:29:01.205847 containerd[1624]: time="2026-01-28T01:29:01.205792664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:29:01.208263 containerd[1624]: time="2026-01-28T01:29:01.206356675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:29:01.208770 kubelet[2995]: E0128 01:29:01.208722 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:29:01.221492 kubelet[2995]: E0128 01:29:01.209735 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:29:01.227561 kubelet[2995]: E0128 
01:29:01.224929 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwzkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:29:01.227561 kubelet[2995]: E0128 01:29:01.227397 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:29:01.229824 containerd[1624]: time="2026-01-28T01:29:01.228957381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:29:01.350310 containerd[1624]: time="2026-01-28T01:29:01.349839154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
01:29:01.384557 containerd[1624]: time="2026-01-28T01:29:01.384345755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:29:01.384557 containerd[1624]: time="2026-01-28T01:29:01.384515102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:29:01.391134 kubelet[2995]: E0128 01:29:01.386325 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:29:01.391134 kubelet[2995]: E0128 01:29:01.386383 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:29:01.391134 kubelet[2995]: E0128 01:29:01.386556 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptffd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:29:01.397635 kubelet[2995]: E0128 01:29:01.396223 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:29:01.563211 containerd[1624]: time="2026-01-28T01:29:01.562369155Z" level=info msg="container event discarded" container=cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874 type=CONTAINER_STARTED_EVENT Jan 28 01:29:02.003653 containerd[1624]: time="2026-01-28T01:29:02.000261505Z" level=info msg="container event discarded" 
container=cd5a69140ab54e712ccf832ebab05a9a79dd750e218ccfd63282e2c8030af874 type=CONTAINER_STOPPED_EVENT Jan 28 01:29:03.106962 kubelet[2995]: E0128 01:29:03.099589 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:07.766452 systemd[1]: Started sshd@9-10.0.0.61:22-10.0.0.1:47366.service - OpenSSH per-connection server daemon (10.0.0.1:47366). Jan 28 01:29:07.792214 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 28 01:29:07.792374 kernel: audit: type=1130 audit(1769563747.768:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.61:22-10.0.0.1:47366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:07.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.61:22-10.0.0.1:47366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:29:08.422000 audit[6330]: USER_ACCT pid=6330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:08.449653 sshd-session[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:08.453705 sshd[6330]: Accepted publickey for core from 10.0.0.1 port 47366 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:29:08.491836 kernel: audit: type=1101 audit(1769563748.422:793): pid=6330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:08.426000 audit[6330]: CRED_ACQ pid=6330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:08.561833 kernel: audit: type=1103 audit(1769563748.426:794): pid=6330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:08.564204 systemd-logind[1590]: New session 11 of user core. 
Jan 28 01:29:08.607164 kernel: audit: type=1006 audit(1769563748.426:795): pid=6330 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 28 01:29:08.607363 kernel: audit: type=1300 audit(1769563748.426:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff755e0340 a2=3 a3=0 items=0 ppid=1 pid=6330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:08.426000 audit[6330]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff755e0340 a2=3 a3=0 items=0 ppid=1 pid=6330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:08.652562 kernel: audit: type=1327 audit(1769563748.426:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:08.426000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:08.669670 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 28 01:29:08.708000 audit[6330]: USER_START pid=6330 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:08.804393 kernel: audit: type=1105 audit(1769563748.708:796): pid=6330 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:08.723000 audit[6336]: CRED_ACQ pid=6336 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:08.851291 kernel: audit: type=1103 audit(1769563748.723:797): pid=6336 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:09.166850 containerd[1624]: time="2026-01-28T01:29:09.151372341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:29:09.259325 containerd[1624]: time="2026-01-28T01:29:09.259176265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:29:09.278777 containerd[1624]: time="2026-01-28T01:29:09.278604063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 
01:29:09.278777 containerd[1624]: time="2026-01-28T01:29:09.278738413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:29:09.281421 kubelet[2995]: E0128 01:29:09.281343 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:29:09.281421 kubelet[2995]: E0128 01:29:09.281403 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:29:09.282764 kubelet[2995]: E0128 01:29:09.282609 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prlsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:29:09.285895 kubelet[2995]: E0128 01:29:09.285861 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:29:09.942536 sshd[6336]: Connection closed by 10.0.0.1 port 47366 Jan 28 01:29:09.944451 sshd-session[6330]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:09.957000 audit[6330]: USER_END pid=6330 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:10.052330 kernel: audit: type=1106 audit(1769563749.957:798): pid=6330 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:10.050759 systemd-logind[1590]: Session 11 logged out. Waiting for processes to exit. 
Jan 28 01:29:09.957000 audit[6330]: CRED_DISP pid=6330 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:10.058503 systemd[1]: sshd@9-10.0.0.61:22-10.0.0.1:47366.service: Deactivated successfully. Jan 28 01:29:10.086197 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:29:10.102461 systemd-logind[1590]: Removed session 11. Jan 28 01:29:10.122661 kernel: audit: type=1104 audit(1769563749.957:799): pid=6330 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:10.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.61:22-10.0.0.1:47366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:29:12.298137 kubelet[2995]: E0128 01:29:12.297854 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:29:12.310808 kubelet[2995]: E0128 01:29:12.308931 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:29:13.073913 kubelet[2995]: E0128 01:29:13.073777 2995 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:29:14.065721 kubelet[2995]: E0128 01:29:14.065338 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:29:14.996509 systemd[1]: Started sshd@10-10.0.0.61:22-10.0.0.1:52834.service - OpenSSH per-connection server daemon (10.0.0.1:52834). Jan 28 01:29:15.030838 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:29:15.031187 kernel: audit: type=1130 audit(1769563754.992:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.61:22-10.0.0.1:52834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:14.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.61:22-10.0.0.1:52834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:29:15.091263 containerd[1624]: time="2026-01-28T01:29:15.090942199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:29:15.207333 containerd[1624]: time="2026-01-28T01:29:15.207266357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:29:15.219955 containerd[1624]: time="2026-01-28T01:29:15.219851658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:29:15.219955 containerd[1624]: time="2026-01-28T01:29:15.220141546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:29:15.224820 kubelet[2995]: E0128 01:29:15.222817 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:29:15.229487 kubelet[2995]: E0128 01:29:15.226749 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:29:15.235348 kubelet[2995]: E0128 01:29:15.235272 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfmt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:29:15.251264 kubelet[2995]: E0128 01:29:15.249505 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:29:15.282733 sshd[6369]: Accepted publickey for core from 10.0.0.1 port 52834 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:29:15.281000 audit[6369]: USER_ACCT pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:15.316239 sshd-session[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:15.299000 audit[6369]: CRED_ACQ pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:15.372438 kernel: audit: type=1101 audit(1769563755.281:802): pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:15.372571 kernel: audit: type=1103 audit(1769563755.299:803): pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:15.395610 systemd-logind[1590]: New session 12 of user core. 
Jan 28 01:29:15.299000 audit[6369]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefc189810 a2=3 a3=0 items=0 ppid=1 pid=6369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:15.431827 kernel: audit: type=1006 audit(1769563755.299:804): pid=6369 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 28 01:29:15.433329 kernel: audit: type=1300 audit(1769563755.299:804): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefc189810 a2=3 a3=0 items=0 ppid=1 pid=6369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:15.433394 kernel: audit: type=1327 audit(1769563755.299:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:15.299000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:15.471499 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 28 01:29:15.496000 audit[6369]: USER_START pid=6369 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:15.557156 kernel: audit: type=1105 audit(1769563755.496:805): pid=6369 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:15.507000 audit[6375]: CRED_ACQ pid=6375 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:15.597220 kernel: audit: type=1103 audit(1769563755.507:806): pid=6375 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:16.059319 sshd[6375]: Connection closed by 10.0.0.1 port 52834 Jan 28 01:29:16.060309 sshd-session[6369]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:16.069000 audit[6369]: USER_END pid=6369 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:16.088802 systemd-logind[1590]: Session 12 logged out. Waiting for processes to exit. 
Jan 28 01:29:16.090512 systemd[1]: sshd@10-10.0.0.61:22-10.0.0.1:52834.service: Deactivated successfully. Jan 28 01:29:16.096332 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:29:16.101182 systemd-logind[1590]: Removed session 12. Jan 28 01:29:16.150287 kernel: audit: type=1106 audit(1769563756.069:807): pid=6369 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:16.071000 audit[6369]: CRED_DISP pid=6369 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:16.204141 kernel: audit: type=1104 audit(1769563756.071:808): pid=6369 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:16.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.61:22-10.0.0.1:52834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:29:21.070580 kubelet[2995]: E0128 01:29:21.070510 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:29:21.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.61:22-10.0.0.1:52854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:21.108742 systemd[1]: Started sshd@11-10.0.0.61:22-10.0.0.1:52854.service - OpenSSH per-connection server daemon (10.0.0.1:52854). Jan 28 01:29:21.133418 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:29:21.138627 kernel: audit: type=1130 audit(1769563761.111:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.61:22-10.0.0.1:52854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:29:21.565000 audit[6418]: USER_ACCT pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:21.592808 sshd[6418]: Accepted publickey for core from 10.0.0.1 port 52854 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:29:21.634151 kernel: audit: type=1101 audit(1769563761.565:811): pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:21.631000 audit[6418]: CRED_ACQ pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:21.639653 sshd-session[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:21.699266 kernel: audit: type=1103 audit(1769563761.631:812): pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:21.746298 kernel: audit: type=1006 audit(1769563761.631:813): pid=6418 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 28 01:29:21.631000 audit[6418]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe13754e50 a2=3 a3=0 items=0 ppid=1 pid=6418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:21.757474 systemd-logind[1590]: New session 13 of user core. Jan 28 01:29:21.832668 kernel: audit: type=1300 audit(1769563761.631:813): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe13754e50 a2=3 a3=0 items=0 ppid=1 pid=6418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:21.832778 kernel: audit: type=1327 audit(1769563761.631:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:21.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:21.852306 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:29:21.873000 audit[6418]: USER_START pid=6418 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:21.906000 audit[6422]: CRED_ACQ pid=6422 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:21.927893 kernel: audit: type=1105 audit(1769563761.873:814): pid=6418 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:21.928200 kernel: audit: type=1103 audit(1769563761.906:815): pid=6422 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:23.295367 sshd[6422]: Connection closed by 10.0.0.1 port 52854 Jan 28 01:29:23.302476 sshd-session[6418]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:23.305000 audit[6418]: USER_END pid=6418 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:23.311408 systemd[1]: sshd@11-10.0.0.61:22-10.0.0.1:52854.service: Deactivated successfully. Jan 28 01:29:23.316388 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:29:23.363478 systemd-logind[1590]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:29:23.305000 audit[6418]: CRED_DISP pid=6418 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:23.381810 systemd-logind[1590]: Removed session 13. 
Jan 28 01:29:23.416228 kernel: audit: type=1106 audit(1769563763.305:816): pid=6418 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:23.417464 kernel: audit: type=1104 audit(1769563763.305:817): pid=6418 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:23.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.61:22-10.0.0.1:52854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:24.062846 kubelet[2995]: E0128 01:29:24.062678 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:29:25.091221 kubelet[2995]: E0128 01:29:25.087770 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:29:25.091221 kubelet[2995]: E0128 01:29:25.090268 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:29:26.075555 kubelet[2995]: E0128 01:29:26.074639 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" 
podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:29:27.079162 kubelet[2995]: E0128 01:29:27.078904 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:29:28.363670 systemd[1]: Started sshd@12-10.0.0.61:22-10.0.0.1:44540.service - OpenSSH per-connection server daemon (10.0.0.1:44540). Jan 28 01:29:28.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.61:22-10.0.0.1:44540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:28.380206 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:29:28.380344 kernel: audit: type=1130 audit(1769563768.356:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.61:22-10.0.0.1:44540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:29:28.735000 audit[6445]: USER_ACCT pid=6445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:28.748468 sshd[6445]: Accepted publickey for core from 10.0.0.1 port 44540 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:29:28.757555 sshd-session[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:28.803437 systemd-logind[1590]: New session 14 of user core. Jan 28 01:29:28.820406 kernel: audit: type=1101 audit(1769563768.735:820): pid=6445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:28.745000 audit[6445]: CRED_ACQ pid=6445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:28.933595 kernel: audit: type=1103 audit(1769563768.745:821): pid=6445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:28.933755 kernel: audit: type=1006 audit(1769563768.746:822): pid=6445 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 28 01:29:28.944462 kernel: audit: type=1300 audit(1769563768.746:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdde626230 a2=3 a3=0 items=0 ppid=1 pid=6445 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:28.746000 audit[6445]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdde626230 a2=3 a3=0 items=0 ppid=1 pid=6445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:28.746000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:29.076738 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:29:29.091901 kernel: audit: type=1327 audit(1769563768.746:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:29.099000 audit[6445]: USER_START pid=6445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:29.171141 kernel: audit: type=1105 audit(1769563769.099:823): pid=6445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:29.171263 kernel: audit: type=1103 audit(1769563769.133:824): pid=6449 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:29.133000 audit[6449]: CRED_ACQ pid=6449 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:29.862169 sshd[6449]: Connection closed by 10.0.0.1 port 44540 Jan 28 01:29:29.863331 sshd-session[6445]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:29.875000 audit[6445]: USER_END pid=6445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:29.896629 systemd[1]: sshd@12-10.0.0.61:22-10.0.0.1:44540.service: Deactivated successfully. Jan 28 01:29:29.910754 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:29:29.875000 audit[6445]: CRED_DISP pid=6445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:29.928371 systemd-logind[1590]: Session 14 logged out. Waiting for processes to exit. 
Jan 28 01:29:29.963129 kernel: audit: type=1106 audit(1769563769.875:825): pid=6445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:29.963205 kernel: audit: type=1104 audit(1769563769.875:826): pid=6445 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:29.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.61:22-10.0.0.1:44540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:29.960725 systemd-logind[1590]: Removed session 14. Jan 28 01:29:30.422160 containerd[1624]: time="2026-01-28T01:29:30.417252258Z" level=info msg="container event discarded" container=40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f type=CONTAINER_CREATED_EVENT Jan 28 01:29:31.225167 containerd[1624]: time="2026-01-28T01:29:31.224938724Z" level=info msg="container event discarded" container=40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f type=CONTAINER_STARTED_EVENT Jan 28 01:29:34.901776 systemd[1]: Started sshd@13-10.0.0.61:22-10.0.0.1:46746.service - OpenSSH per-connection server daemon (10.0.0.1:46746). Jan 28 01:29:34.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.61:22-10.0.0.1:46746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:29:34.980532 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:29:34.980642 kernel: audit: type=1130 audit(1769563774.909:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.61:22-10.0.0.1:46746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:35.332490 sshd[6466]: Accepted publickey for core from 10.0.0.1 port 46746 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:29:35.329000 audit[6466]: USER_ACCT pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:35.380773 sshd-session[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:35.413560 systemd-logind[1590]: New session 15 of user core. 
Jan 28 01:29:35.341000 audit[6466]: CRED_ACQ pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:35.500428 kernel: audit: type=1101 audit(1769563775.329:829): pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:35.500587 kernel: audit: type=1103 audit(1769563775.341:830): pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:35.341000 audit[6466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0629f840 a2=3 a3=0 items=0 ppid=1 pid=6466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:35.571910 kernel: audit: type=1006 audit(1769563775.341:831): pid=6466 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 28 01:29:35.572783 kernel: audit: type=1300 audit(1769563775.341:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0629f840 a2=3 a3=0 items=0 ppid=1 pid=6466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:35.572833 kernel: audit: type=1327 audit(1769563775.341:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:35.341000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:35.599292 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:29:35.664000 audit[6466]: USER_START pid=6466 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:35.711960 kernel: audit: type=1105 audit(1769563775.664:832): pid=6466 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:35.709000 audit[6470]: CRED_ACQ pid=6470 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:35.794423 kernel: audit: type=1103 audit(1769563775.709:833): pid=6470 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:36.122511 kubelet[2995]: E0128 01:29:36.119592 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:29:36.122511 kubelet[2995]: E0128 01:29:36.121599 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:29:36.129417 kubelet[2995]: E0128 01:29:36.126876 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 
01:29:36.613756 sshd[6470]: Connection closed by 10.0.0.1 port 46746 Jan 28 01:29:36.616372 sshd-session[6466]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:36.620000 audit[6466]: USER_END pid=6466 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:36.620000 audit[6466]: CRED_DISP pid=6466 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:36.704548 systemd[1]: sshd@13-10.0.0.61:22-10.0.0.1:46746.service: Deactivated successfully. Jan 28 01:29:36.724956 kernel: audit: type=1106 audit(1769563776.620:834): pid=6466 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:36.725181 kernel: audit: type=1104 audit(1769563776.620:835): pid=6466 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:36.714332 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:29:36.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.61:22-10.0.0.1:46746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:29:36.729200 systemd-logind[1590]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:29:36.735250 systemd-logind[1590]: Removed session 15. Jan 28 01:29:39.084315 kubelet[2995]: E0128 01:29:39.084261 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:29:40.067105 kubelet[2995]: E0128 01:29:40.064617 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:29:40.232891 containerd[1624]: time="2026-01-28T01:29:40.232533970Z" level=info msg="container event discarded" container=40dd44f1f1515f368f13e558b5a4019b9ae5d1e0cd260fed5d6fccc77bea016f type=CONTAINER_STOPPED_EVENT Jan 28 01:29:41.090291 kubelet[2995]: E0128 01:29:41.089337 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:29:41.705768 systemd[1]: Started sshd@14-10.0.0.61:22-10.0.0.1:46764.service - OpenSSH per-connection server daemon (10.0.0.1:46764). Jan 28 01:29:41.743447 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:29:41.743626 kernel: audit: type=1130 audit(1769563781.705:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.61:22-10.0.0.1:46764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:41.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.61:22-10.0.0.1:46764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:42.073371 kubelet[2995]: E0128 01:29:42.064554 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:42.249000 audit[6485]: USER_ACCT pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:42.256480 sshd[6485]: Accepted publickey for core from 10.0.0.1 port 46764 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:29:42.265664 sshd-session[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:42.317493 kernel: audit: type=1101 audit(1769563782.249:838): pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:42.317623 kernel: audit: type=1103 audit(1769563782.251:839): pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:42.251000 audit[6485]: CRED_ACQ pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:42.351808 systemd-logind[1590]: New session 16 of user core. Jan 28 01:29:42.382276 kernel: audit: type=1006 audit(1769563782.251:840): pid=6485 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 28 01:29:42.251000 audit[6485]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb1088d70 a2=3 a3=0 items=0 ppid=1 pid=6485 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:42.492096 kernel: audit: type=1300 audit(1769563782.251:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb1088d70 a2=3 a3=0 items=0 ppid=1 pid=6485 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:42.251000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:42.512517 kernel: audit: type=1327 audit(1769563782.251:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:42.522185 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 28 01:29:42.552000 audit[6485]: USER_START pid=6485 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:42.555000 audit[6489]: CRED_ACQ pid=6489 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:42.632900 kernel: audit: type=1105 audit(1769563782.552:841): pid=6485 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:42.638513 kernel: audit: type=1103 audit(1769563782.555:842): pid=6489 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:43.183664 sshd[6489]: Connection closed by 10.0.0.1 port 46764 Jan 28 01:29:43.184684 sshd-session[6485]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:43.187000 audit[6485]: USER_END pid=6485 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:43.292572 kernel: audit: type=1106 audit(1769563783.187:843): pid=6485 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:43.292803 kernel: audit: type=1104 audit(1769563783.187:844): pid=6485 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:43.187000 audit[6485]: CRED_DISP pid=6485 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:43.259196 systemd[1]: sshd@14-10.0.0.61:22-10.0.0.1:46764.service: Deactivated successfully. Jan 28 01:29:43.275804 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:29:43.293245 systemd-logind[1590]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:29:43.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.61:22-10.0.0.1:46764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:43.307844 systemd-logind[1590]: Removed session 16. 
Jan 28 01:29:46.065274 kubelet[2995]: E0128 01:29:46.063555 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:48.277152 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:29:48.277304 kernel: audit: type=1130 audit(1769563788.265:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.61:22-10.0.0.1:50194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:48.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.61:22-10.0.0.1:50194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:48.267208 systemd[1]: Started sshd@15-10.0.0.61:22-10.0.0.1:50194.service - OpenSSH per-connection server daemon (10.0.0.1:50194). Jan 28 01:29:48.618000 audit[6506]: USER_ACCT pid=6506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:48.625424 sshd[6506]: Accepted publickey for core from 10.0.0.1 port 50194 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:29:48.631954 sshd-session[6506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:48.627000 audit[6506]: CRED_ACQ pid=6506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:48.676660 systemd-logind[1590]: New session 17 of user core. 
Jan 28 01:29:48.702748 kernel: audit: type=1101 audit(1769563788.618:847): pid=6506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:48.702882 kernel: audit: type=1103 audit(1769563788.627:848): pid=6506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:48.702906 kernel: audit: type=1006 audit(1769563788.629:849): pid=6506 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 28 01:29:48.629000 audit[6506]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde28484f0 a2=3 a3=0 items=0 ppid=1 pid=6506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:48.750407 kernel: audit: type=1300 audit(1769563788.629:849): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde28484f0 a2=3 a3=0 items=0 ppid=1 pid=6506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:48.755742 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 28 01:29:48.629000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:48.764000 audit[6506]: USER_START pid=6506 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:48.823194 kernel: audit: type=1327 audit(1769563788.629:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:48.823334 kernel: audit: type=1105 audit(1769563788.764:850): pid=6506 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:48.823382 kernel: audit: type=1103 audit(1769563788.777:851): pid=6510 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:48.777000 audit[6510]: CRED_ACQ pid=6510 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:49.186549 sshd[6510]: Connection closed by 10.0.0.1 port 50194 Jan 28 01:29:49.190477 sshd-session[6506]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:49.198000 audit[6506]: USER_END pid=6506 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:49.205718 systemd[1]: sshd@15-10.0.0.61:22-10.0.0.1:50194.service: Deactivated successfully. Jan 28 01:29:49.223333 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:29:49.230504 systemd-logind[1590]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:29:49.198000 audit[6506]: CRED_DISP pid=6506 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:49.250720 systemd-logind[1590]: Removed session 17. Jan 28 01:29:49.259945 kernel: audit: type=1106 audit(1769563789.198:852): pid=6506 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:49.260213 kernel: audit: type=1104 audit(1769563789.198:853): pid=6506 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:49.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.61:22-10.0.0.1:50194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:29:50.065198 kubelet[2995]: E0128 01:29:50.064875 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:29:50.069664 kubelet[2995]: E0128 01:29:50.067688 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:29:51.120460 kubelet[2995]: E0128 01:29:51.115560 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:29:51.137383 kubelet[2995]: E0128 01:29:51.132480 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:29:51.165660 kubelet[2995]: E0128 01:29:51.148820 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:29:52.076734 kubelet[2995]: E0128 01:29:52.070743 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:29:54.305165 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:29:54.305563 kernel: audit: type=1130 audit(1769563794.263:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.61:22-10.0.0.1:41136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:54.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.61:22-10.0.0.1:41136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:29:54.264656 systemd[1]: Started sshd@16-10.0.0.61:22-10.0.0.1:41136.service - OpenSSH per-connection server daemon (10.0.0.1:41136). Jan 28 01:29:54.646133 sshd[6551]: Accepted publickey for core from 10.0.0.1 port 41136 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:29:54.642000 audit[6551]: USER_ACCT pid=6551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:54.702354 sshd-session[6551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:54.681000 audit[6551]: CRED_ACQ pid=6551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:54.751434 systemd-logind[1590]: New session 18 of user core. 
Jan 28 01:29:54.768330 kernel: audit: type=1101 audit(1769563794.642:856): pid=6551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:54.768395 kernel: audit: type=1103 audit(1769563794.681:857): pid=6551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:54.768420 kernel: audit: type=1006 audit(1769563794.681:858): pid=6551 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 28 01:29:54.775402 kernel: audit: type=1300 audit(1769563794.681:858): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2c6fc7e0 a2=3 a3=0 items=0 ppid=1 pid=6551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:54.681000 audit[6551]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2c6fc7e0 a2=3 a3=0 items=0 ppid=1 pid=6551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:29:54.681000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:54.814416 kernel: audit: type=1327 audit(1769563794.681:858): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:29:54.826610 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 01:29:54.855000 audit[6551]: USER_START pid=6551 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:54.901167 kernel: audit: type=1105 audit(1769563794.855:859): pid=6551 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:54.865000 audit[6555]: CRED_ACQ pid=6555 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:54.984180 kernel: audit: type=1103 audit(1769563794.865:860): pid=6555 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:55.581536 sshd[6555]: Connection closed by 10.0.0.1 port 41136 Jan 28 01:29:55.581338 sshd-session[6551]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:55.617454 kernel: audit: type=1106 audit(1769563795.583:861): pid=6551 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:55.583000 audit[6551]: USER_END pid=6551 uid=0 auid=500 ses=18 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:55.621336 systemd[1]: sshd@16-10.0.0.61:22-10.0.0.1:41136.service: Deactivated successfully. Jan 28 01:29:55.636435 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:29:55.583000 audit[6551]: CRED_DISP pid=6551 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:55.653290 systemd-logind[1590]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:29:55.657176 systemd-logind[1590]: Removed session 18. Jan 28 01:29:55.697228 kernel: audit: type=1104 audit(1769563795.583:862): pid=6551 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:29:55.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.61:22-10.0.0.1:41136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:00.653592 systemd[1]: Started sshd@17-10.0.0.61:22-10.0.0.1:41158.service - OpenSSH per-connection server daemon (10.0.0.1:41158). Jan 28 01:30:00.662331 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:00.662394 kernel: audit: type=1130 audit(1769563800.652:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.61:22-10.0.0.1:41158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:00.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.61:22-10.0.0.1:41158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:00.961000 audit[6572]: USER_ACCT pid=6572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:00.968826 sshd[6572]: Accepted publickey for core from 10.0.0.1 port 41158 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:00.977493 sshd-session[6572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:01.015229 kernel: audit: type=1101 audit(1769563800.961:865): pid=6572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:00.974000 audit[6572]: CRED_ACQ pid=6572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.083856 systemd-logind[1590]: New session 19 of user core. 
Jan 28 01:30:01.094794 kernel: audit: type=1103 audit(1769563800.974:866): pid=6572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.094864 kernel: audit: type=1006 audit(1769563800.974:867): pid=6572 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 28 01:30:01.094908 kernel: audit: type=1300 audit(1769563800.974:867): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe03a51c30 a2=3 a3=0 items=0 ppid=1 pid=6572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:00.974000 audit[6572]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe03a51c30 a2=3 a3=0 items=0 ppid=1 pid=6572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:01.098874 kubelet[2995]: E0128 01:30:01.093943 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:01.132771 kubelet[2995]: E0128 01:30:01.132223 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" 
podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:30:00.974000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:01.177679 kernel: audit: type=1327 audit(1769563800.974:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:01.183238 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:30:01.208000 audit[6572]: USER_START pid=6572 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.344513 kernel: audit: type=1105 audit(1769563801.208:868): pid=6572 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.358176 kernel: audit: type=1103 audit(1769563801.218:869): pid=6576 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.218000 audit[6576]: CRED_ACQ pid=6576 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.741326 sshd[6576]: Connection closed by 10.0.0.1 port 41158 Jan 28 01:30:01.744633 sshd-session[6572]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:01.747000 audit[6572]: USER_END pid=6572 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.775926 systemd[1]: sshd@17-10.0.0.61:22-10.0.0.1:41158.service: Deactivated successfully. Jan 28 01:30:01.798945 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:30:01.809297 systemd-logind[1590]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:30:01.827301 kernel: audit: type=1106 audit(1769563801.747:870): pid=6572 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.827434 kernel: audit: type=1104 audit(1769563801.747:871): pid=6572 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.747000 audit[6572]: CRED_DISP pid=6572 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:01.829852 systemd-logind[1590]: Removed session 19. Jan 28 01:30:01.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.61:22-10.0.0.1:41158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:02.067478 kubelet[2995]: E0128 01:30:02.062843 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:30:02.067478 kubelet[2995]: E0128 01:30:02.063527 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:30:02.071216 kubelet[2995]: E0128 01:30:02.070630 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:30:03.086105 kubelet[2995]: E0128 01:30:03.085205 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:30:04.079616 kubelet[2995]: E0128 01:30:04.060398 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:04.095778 kubelet[2995]: E0128 01:30:04.095679 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:30:06.813712 systemd[1]: Started 
sshd@18-10.0.0.61:22-10.0.0.1:37202.service - OpenSSH per-connection server daemon (10.0.0.1:37202). Jan 28 01:30:06.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.61:22-10.0.0.1:37202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:06.826248 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:06.826743 kernel: audit: type=1130 audit(1769563806.811:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.61:22-10.0.0.1:37202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:07.043000 audit[6591]: USER_ACCT pid=6591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.081738 kernel: audit: type=1101 audit(1769563807.043:874): pid=6591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.081845 sshd[6591]: Accepted publickey for core from 10.0.0.1 port 37202 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:07.053511 sshd-session[6591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:07.079204 systemd-logind[1590]: New session 20 of user core. 
Jan 28 01:30:07.047000 audit[6591]: CRED_ACQ pid=6591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.140750 kernel: audit: type=1103 audit(1769563807.047:875): pid=6591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.140867 kernel: audit: type=1006 audit(1769563807.049:876): pid=6591 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 28 01:30:07.128340 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:30:07.175182 kernel: audit: type=1300 audit(1769563807.049:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe080c5240 a2=3 a3=0 items=0 ppid=1 pid=6591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:07.049000 audit[6591]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe080c5240 a2=3 a3=0 items=0 ppid=1 pid=6591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:07.228487 kernel: audit: type=1327 audit(1769563807.049:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:07.049000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:07.142000 audit[6591]: USER_START pid=6591 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.334210 kernel: audit: type=1105 audit(1769563807.142:877): pid=6591 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.334380 kernel: audit: type=1103 audit(1769563807.169:878): pid=6595 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.169000 audit[6595]: CRED_ACQ pid=6595 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.941901 sshd[6595]: Connection closed by 10.0.0.1 port 37202 Jan 28 01:30:07.944675 sshd-session[6591]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:07.942000 audit[6591]: USER_END pid=6591 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.982246 systemd[1]: sshd@18-10.0.0.61:22-10.0.0.1:37202.service: Deactivated successfully. Jan 28 01:30:07.986431 systemd-logind[1590]: Session 20 logged out. Waiting for processes to exit. 
Jan 28 01:30:08.046650 kernel: audit: type=1106 audit(1769563807.942:879): pid=6591 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:08.050717 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:30:07.947000 audit[6591]: CRED_DISP pid=6591 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:08.090108 systemd-logind[1590]: Removed session 20. Jan 28 01:30:08.102768 kernel: audit: type=1104 audit(1769563807.947:880): pid=6591 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:07.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.61:22-10.0.0.1:37202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:12.069497 kubelet[2995]: E0128 01:30:12.069441 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:13.021685 systemd[1]: Started sshd@19-10.0.0.61:22-10.0.0.1:60350.service - OpenSSH per-connection server daemon (10.0.0.1:60350). 
Jan 28 01:30:13.042683 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:13.042785 kernel: audit: type=1130 audit(1769563813.020:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.61:22-10.0.0.1:60350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:13.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.61:22-10.0.0.1:60350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:13.080488 kubelet[2995]: E0128 01:30:13.080220 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:13.088109 kubelet[2995]: E0128 01:30:13.086369 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:30:13.089493 kubelet[2995]: E0128 01:30:13.086772 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:30:13.555000 audit[6610]: USER_ACCT pid=6610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:13.567550 sshd[6610]: Accepted publickey for core from 10.0.0.1 port 60350 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:13.590181 kernel: audit: type=1101 audit(1769563813.555:883): pid=6610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:13.594000 audit[6610]: CRED_ACQ pid=6610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:13.616472 kernel: audit: type=1103 audit(1769563813.594:884): pid=6610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:13.617318 sshd-session[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:13.595000 audit[6610]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb33432b0 a2=3 a3=0 items=0 ppid=1 pid=6610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:13.666818 systemd-logind[1590]: New session 21 of 
user core. Jan 28 01:30:13.683299 kernel: audit: type=1006 audit(1769563813.595:885): pid=6610 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 28 01:30:13.683426 kernel: audit: type=1300 audit(1769563813.595:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb33432b0 a2=3 a3=0 items=0 ppid=1 pid=6610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:13.684535 kernel: audit: type=1327 audit(1769563813.595:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:13.595000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:13.705909 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:30:13.740000 audit[6610]: USER_START pid=6610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:13.790310 kernel: audit: type=1105 audit(1769563813.740:886): pid=6610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:13.758000 audit[6614]: CRED_ACQ pid=6614 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:13.817638 kernel: audit: type=1103 audit(1769563813.758:887): 
pid=6614 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:14.042369 sshd[6614]: Connection closed by 10.0.0.1 port 60350 Jan 28 01:30:14.043118 sshd-session[6610]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:14.049000 audit[6610]: USER_END pid=6610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:14.065674 kubelet[2995]: E0128 01:30:14.064338 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:30:14.080393 systemd[1]: sshd@19-10.0.0.61:22-10.0.0.1:60350.service: Deactivated successfully. Jan 28 01:30:14.080527 systemd-logind[1590]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:30:14.085335 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 28 01:30:14.095083 systemd-logind[1590]: Removed session 21. Jan 28 01:30:14.107793 kernel: audit: type=1106 audit(1769563814.049:888): pid=6610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:14.049000 audit[6610]: CRED_DISP pid=6610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:14.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.61:22-10.0.0.1:60350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:14.171318 kernel: audit: type=1104 audit(1769563814.049:889): pid=6610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:15.083866 kubelet[2995]: E0128 01:30:15.083636 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:30:17.068177 kubelet[2995]: E0128 01:30:17.065489 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:17.076339 kubelet[2995]: E0128 01:30:17.073869 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:30:17.076339 kubelet[2995]: E0128 01:30:17.074199 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:30:19.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.61:22-10.0.0.1:60368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:19.080126 systemd[1]: Started sshd@20-10.0.0.61:22-10.0.0.1:60368.service - OpenSSH per-connection server daemon (10.0.0.1:60368). Jan 28 01:30:19.121544 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:19.121647 kernel: audit: type=1130 audit(1769563819.079:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.61:22-10.0.0.1:60368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:19.341000 audit[6629]: USER_ACCT pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:19.351460 sshd[6629]: Accepted publickey for core from 10.0.0.1 port 60368 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:19.353760 sshd-session[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:19.366270 kernel: audit: type=1101 audit(1769563819.341:892): pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:19.348000 audit[6629]: CRED_ACQ pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:19.376363 systemd-logind[1590]: New session 22 of user core. 
Jan 28 01:30:19.396109 kernel: audit: type=1103 audit(1769563819.348:893): pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:19.349000 audit[6629]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebfa56920 a2=3 a3=0 items=0 ppid=1 pid=6629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:19.414533 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 01:30:19.439667 kernel: audit: type=1006 audit(1769563819.349:894): pid=6629 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 28 01:30:19.439788 kernel: audit: type=1300 audit(1769563819.349:894): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebfa56920 a2=3 a3=0 items=0 ppid=1 pid=6629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:19.349000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:19.450259 kernel: audit: type=1327 audit(1769563819.349:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:19.449000 audit[6629]: USER_START pid=6629 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:19.461000 audit[6633]: CRED_ACQ pid=6633 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:19.500265 kernel: audit: type=1105 audit(1769563819.449:895): pid=6629 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:19.500499 kernel: audit: type=1103 audit(1769563819.461:896): pid=6633 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:20.135925 sshd[6633]: Connection closed by 10.0.0.1 port 60368 Jan 28 01:30:20.144804 sshd-session[6629]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:20.159000 audit[6629]: USER_END pid=6629 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:20.231401 kernel: audit: type=1106 audit(1769563820.159:897): pid=6629 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:20.241284 systemd[1]: sshd@20-10.0.0.61:22-10.0.0.1:60368.service: Deactivated successfully. 
Jan 28 01:30:20.159000 audit[6629]: CRED_DISP pid=6629 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:20.250220 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:30:20.261271 systemd-logind[1590]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:30:20.277257 kernel: audit: type=1104 audit(1769563820.159:898): pid=6629 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:20.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.61:22-10.0.0.1:60368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:20.279197 systemd-logind[1590]: Removed session 22. Jan 28 01:30:24.066180 kubelet[2995]: E0128 01:30:24.065855 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:30:25.233564 systemd[1]: Started sshd@21-10.0.0.61:22-10.0.0.1:48698.service - OpenSSH per-connection server daemon (10.0.0.1:48698). 
Jan 28 01:30:25.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.61:22-10.0.0.1:48698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:25.256532 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:25.256667 kernel: audit: type=1130 audit(1769563825.232:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.61:22-10.0.0.1:48698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:25.549000 audit[6675]: USER_ACCT pid=6675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:25.614463 sshd[6675]: Accepted publickey for core from 10.0.0.1 port 48698 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:25.618370 kernel: audit: type=1101 audit(1769563825.549:901): pid=6675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:25.620000 audit[6675]: CRED_ACQ pid=6675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:25.637324 sshd-session[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:25.699121 kernel: audit: type=1103 audit(1769563825.620:902): pid=6675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:25.699242 kernel: audit: type=1006 audit(1769563825.620:903): pid=6675 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 28 01:30:25.699284 kernel: audit: type=1300 audit(1769563825.620:903): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe41b9dcc0 a2=3 a3=0 items=0 ppid=1 pid=6675 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:25.620000 audit[6675]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe41b9dcc0 a2=3 a3=0 items=0 ppid=1 pid=6675 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:25.708736 systemd-logind[1590]: New session 23 of user core. Jan 28 01:30:25.620000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:25.786729 kernel: audit: type=1327 audit(1769563825.620:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:25.796297 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 28 01:30:25.821000 audit[6675]: USER_START pid=6675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:25.886634 kernel: audit: type=1105 audit(1769563825.821:904): pid=6675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:25.886854 kernel: audit: type=1103 audit(1769563825.842:905): pid=6679 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:25.842000 audit[6679]: CRED_ACQ pid=6679 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:26.085546 kubelet[2995]: E0128 01:30:26.085480 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:30:26.514346 sshd[6679]: Connection closed by 10.0.0.1 port 48698 Jan 28 01:30:26.514418 sshd-session[6675]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:26.529000 audit[6675]: USER_END pid=6675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:26.559846 systemd[1]: sshd@21-10.0.0.61:22-10.0.0.1:48698.service: Deactivated successfully. Jan 28 01:30:26.575225 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:30:26.631169 kernel: audit: type=1106 audit(1769563826.529:906): pid=6675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:26.631301 kernel: audit: type=1104 audit(1769563826.529:907): pid=6675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:26.529000 audit[6675]: CRED_DISP pid=6675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:26.629866 
systemd-logind[1590]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:30:26.642765 systemd-logind[1590]: Removed session 23. Jan 28 01:30:26.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.61:22-10.0.0.1:48698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:27.069376 kubelet[2995]: E0128 01:30:27.068648 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:27.102128 kubelet[2995]: E0128 01:30:27.097649 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:30:29.071602 kubelet[2995]: E0128 01:30:29.069463 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:30:29.088352 kubelet[2995]: E0128 01:30:29.086234 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:30:30.164252 kubelet[2995]: E0128 01:30:30.163116 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:30:31.602612 systemd[1]: Started sshd@22-10.0.0.61:22-10.0.0.1:48712.service - OpenSSH per-connection server daemon (10.0.0.1:48712). Jan 28 01:30:31.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.61:22-10.0.0.1:48712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:31.630320 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:31.630455 kernel: audit: type=1130 audit(1769563831.599:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.61:22-10.0.0.1:48712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:32.119201 sshd[6695]: Accepted publickey for core from 10.0.0.1 port 48712 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:32.116000 audit[6695]: USER_ACCT pid=6695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:32.122368 sshd-session[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:32.193759 systemd-logind[1590]: New session 24 of user core. 
Jan 28 01:30:32.117000 audit[6695]: CRED_ACQ pid=6695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:32.269297 kernel: audit: type=1101 audit(1769563832.116:910): pid=6695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:32.269439 kernel: audit: type=1103 audit(1769563832.117:911): pid=6695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:32.269467 kernel: audit: type=1006 audit(1769563832.117:912): pid=6695 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 28 01:30:32.117000 audit[6695]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe80814660 a2=3 a3=0 items=0 ppid=1 pid=6695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:32.317191 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 28 01:30:32.480556 kernel: audit: type=1300 audit(1769563832.117:912): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe80814660 a2=3 a3=0 items=0 ppid=1 pid=6695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:32.480712 kernel: audit: type=1327 audit(1769563832.117:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:32.117000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:32.400000 audit[6695]: USER_START pid=6695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:32.664160 kernel: audit: type=1105 audit(1769563832.400:913): pid=6695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:32.664382 kernel: audit: type=1103 audit(1769563832.428:914): pid=6699 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:32.428000 audit[6699]: CRED_ACQ pid=6699 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:33.257390 sshd[6699]: Connection closed by 10.0.0.1 port 48712 
Jan 28 01:30:33.260295 sshd-session[6695]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:33.274000 audit[6695]: USER_END pid=6695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:33.293702 systemd[1]: sshd@22-10.0.0.61:22-10.0.0.1:48712.service: Deactivated successfully. Jan 28 01:30:33.307430 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:30:33.319205 systemd-logind[1590]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:30:33.329252 systemd-logind[1590]: Removed session 24. Jan 28 01:30:33.274000 audit[6695]: CRED_DISP pid=6695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:33.435324 kernel: audit: type=1106 audit(1769563833.274:915): pid=6695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:33.435479 kernel: audit: type=1104 audit(1769563833.274:916): pid=6695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:33.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.61:22-10.0.0.1:48712 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 28 01:30:35.084256 kubelet[2995]: E0128 01:30:35.081562 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:30:36.704093 containerd[1624]: time="2026-01-28T01:30:36.695089997Z" level=info msg="container event discarded" container=037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79 type=CONTAINER_CREATED_EVENT Jan 28 01:30:37.803897 containerd[1624]: time="2026-01-28T01:30:37.803422931Z" level=info msg="container event discarded" container=037e8856f9539f3882089780f9542d72a06b2ca94ea942068f46fb690501af79 type=CONTAINER_STARTED_EVENT Jan 28 01:30:38.310472 systemd[1]: Started sshd@23-10.0.0.61:22-10.0.0.1:58064.service - OpenSSH per-connection server daemon (10.0.0.1:58064). Jan 28 01:30:38.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.61:22-10.0.0.1:58064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:38.385654 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:38.385770 kernel: audit: type=1130 audit(1769563838.309:918): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.61:22-10.0.0.1:58064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:38.689000 audit[6714]: USER_ACCT pid=6714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:38.701413 sshd-session[6714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:38.718140 sshd[6714]: Accepted publickey for core from 10.0.0.1 port 58064 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:38.763331 systemd-logind[1590]: New session 25 of user core. Jan 28 01:30:38.799578 kernel: audit: type=1101 audit(1769563838.689:919): pid=6714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:38.799692 kernel: audit: type=1103 audit(1769563838.696:920): pid=6714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:38.696000 audit[6714]: CRED_ACQ pid=6714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:38.894538 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 28 01:30:38.969583 kernel: audit: type=1006 audit(1769563838.696:921): pid=6714 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 28 01:30:38.969699 kernel: audit: type=1300 audit(1769563838.696:921): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbf2b98d0 a2=3 a3=0 items=0 ppid=1 pid=6714 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:38.696000 audit[6714]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbf2b98d0 a2=3 a3=0 items=0 ppid=1 pid=6714 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:38.696000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:39.114575 kubelet[2995]: E0128 01:30:39.114277 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:30:39.121740 kernel: audit: type=1327 audit(1769563838.696:921): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:39.121815 kernel: audit: type=1105 audit(1769563838.926:922): pid=6714 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:38.926000 audit[6714]: USER_START pid=6714 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:38.964000 audit[6718]: CRED_ACQ pid=6718 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:39.326580 kernel: audit: type=1103 audit(1769563838.964:923): pid=6718 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:40.382287 sshd[6718]: Connection closed by 10.0.0.1 port 58064 Jan 28 01:30:40.374319 sshd-session[6714]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:40.476000 audit[6714]: USER_END pid=6714 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:40.618718 systemd[1]: 
sshd@23-10.0.0.61:22-10.0.0.1:58064.service: Deactivated successfully. Jan 28 01:30:40.653440 kernel: audit: type=1106 audit(1769563840.476:924): pid=6714 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:40.653895 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:30:40.476000 audit[6714]: CRED_DISP pid=6714 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:40.670715 systemd-logind[1590]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:30:40.673802 systemd-logind[1590]: Removed session 25. Jan 28 01:30:40.719734 kernel: audit: type=1104 audit(1769563840.476:925): pid=6714 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:40.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.61:22-10.0.0.1:58064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:42.089577 kubelet[2995]: E0128 01:30:42.087363 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:30:42.089577 kubelet[2995]: E0128 01:30:42.088131 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:30:44.067719 kubelet[2995]: E0128 01:30:44.067602 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:30:44.098166 kubelet[2995]: E0128 01:30:44.094270 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:30:45.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.61:22-10.0.0.1:47260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:45.473847 systemd[1]: Started sshd@24-10.0.0.61:22-10.0.0.1:47260.service - OpenSSH per-connection server daemon (10.0.0.1:47260). Jan 28 01:30:45.492813 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:45.493664 kernel: audit: type=1130 audit(1769563845.472:927): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.61:22-10.0.0.1:47260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:45.960000 audit[6733]: USER_ACCT pid=6733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.053683 kernel: audit: type=1101 audit(1769563845.960:928): pid=6733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.053777 sshd[6733]: Accepted publickey for core from 10.0.0.1 port 47260 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:46.077000 audit[6733]: CRED_ACQ pid=6733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.103836 sshd-session[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:46.135483 kernel: audit: type=1103 audit(1769563846.077:929): pid=6733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.135631 kernel: audit: type=1006 audit(1769563846.082:930): pid=6733 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 28 01:30:46.160559 kernel: audit: type=1300 audit(1769563846.082:930): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea70c3b30 a2=3 a3=0 items=0 ppid=1 pid=6733 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:46.082000 audit[6733]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea70c3b30 a2=3 a3=0 items=0 ppid=1 pid=6733 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:46.157180 systemd-logind[1590]: New session 26 of user core. Jan 28 01:30:46.082000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:46.177104 kernel: audit: type=1327 audit(1769563846.082:930): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:46.177580 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 01:30:46.203000 audit[6733]: USER_START pid=6733 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.262170 kernel: audit: type=1105 audit(1769563846.203:931): pid=6733 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.219000 audit[6737]: CRED_ACQ pid=6737 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.304364 kernel: audit: type=1103 audit(1769563846.219:932): pid=6737 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.761384 sshd[6737]: Connection closed by 10.0.0.1 port 47260 Jan 28 01:30:46.765835 sshd-session[6733]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:46.772000 audit[6733]: USER_END pid=6733 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.811913 systemd[1]: sshd@24-10.0.0.61:22-10.0.0.1:47260.service: Deactivated successfully. Jan 28 01:30:46.839663 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:30:46.852428 systemd-logind[1590]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:30:46.867732 kernel: audit: type=1106 audit(1769563846.772:933): pid=6733 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.866299 systemd-logind[1590]: Removed session 26. Jan 28 01:30:46.772000 audit[6733]: CRED_DISP pid=6733 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:46.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.61:22-10.0.0.1:47260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:46.931397 kernel: audit: type=1104 audit(1769563846.772:934): pid=6733 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:49.113136 kubelet[2995]: E0128 01:30:49.112642 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:30:51.092160 kubelet[2995]: E0128 01:30:51.080462 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:30:51.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@25-10.0.0.61:22-10.0.0.1:47286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:51.898249 systemd[1]: Started sshd@25-10.0.0.61:22-10.0.0.1:47286.service - OpenSSH per-connection server daemon (10.0.0.1:47286). Jan 28 01:30:51.925381 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:51.925519 kernel: audit: type=1130 audit(1769563851.897:936): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.61:22-10.0.0.1:47286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:52.448000 audit[6792]: USER_ACCT pid=6792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:52.454637 sshd[6792]: Accepted publickey for core from 10.0.0.1 port 47286 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:52.467675 sshd-session[6792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:52.610194 kernel: audit: type=1101 audit(1769563852.448:937): pid=6792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:52.610344 kernel: audit: type=1103 audit(1769563852.457:938): pid=6792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:52.457000 audit[6792]: CRED_ACQ pid=6792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:52.552174 systemd-logind[1590]: New session 27 of user core. Jan 28 01:30:52.457000 audit[6792]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebf477910 a2=3 a3=0 items=0 ppid=1 pid=6792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:52.705706 kernel: audit: type=1006 audit(1769563852.457:939): pid=6792 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 28 01:30:52.705863 kernel: audit: type=1300 audit(1769563852.457:939): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebf477910 a2=3 a3=0 items=0 ppid=1 pid=6792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:52.707479 kernel: audit: type=1327 audit(1769563852.457:939): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:52.457000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:52.706411 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 28 01:30:52.763000 audit[6792]: USER_START pid=6792 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:52.794000 audit[6796]: CRED_ACQ pid=6796 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:52.945772 kernel: audit: type=1105 audit(1769563852.763:940): pid=6792 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:52.945886 kernel: audit: type=1103 audit(1769563852.794:941): pid=6796 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:53.767000 audit[6792]: USER_END pid=6792 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:53.773365 systemd[1]: sshd@25-10.0.0.61:22-10.0.0.1:47286.service: Deactivated successfully. 
Jan 28 01:30:53.787906 sshd[6796]: Connection closed by 10.0.0.1 port 47286 Jan 28 01:30:53.758709 sshd-session[6792]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:53.779174 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 01:30:53.786280 systemd-logind[1590]: Session 27 logged out. Waiting for processes to exit. Jan 28 01:30:53.788414 systemd-logind[1590]: Removed session 27. Jan 28 01:30:53.768000 audit[6792]: CRED_DISP pid=6792 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:53.949592 kernel: audit: type=1106 audit(1769563853.767:942): pid=6792 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:53.949813 kernel: audit: type=1104 audit(1769563853.768:943): pid=6792 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:53.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.61:22-10.0.0.1:47286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:30:54.065388 kubelet[2995]: E0128 01:30:54.064296 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:30:54.520726 containerd[1624]: time="2026-01-28T01:30:54.518192494Z" level=info msg="container event discarded" container=ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70 type=CONTAINER_CREATED_EVENT Jan 28 01:30:54.520726 containerd[1624]: time="2026-01-28T01:30:54.518255932Z" level=info msg="container event discarded" container=ff03320ec5caa56388e7b9a109efff91d9f09592a964ed6132ec94c51cfe1f70 type=CONTAINER_STARTED_EVENT Jan 28 01:30:56.078520 kubelet[2995]: E0128 01:30:56.077361 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" 
podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:30:57.093365 kubelet[2995]: E0128 01:30:57.091348 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:57.093365 kubelet[2995]: E0128 01:30:57.092219 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:30:58.871425 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:30:58.871603 kernel: audit: type=1130 audit(1769563858.820:945): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.61:22-10.0.0.1:60976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:58.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.61:22-10.0.0.1:60976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:30:58.820768 systemd[1]: Started sshd@26-10.0.0.61:22-10.0.0.1:60976.service - OpenSSH per-connection server daemon (10.0.0.1:60976). 
Jan 28 01:30:59.103176 kubelet[2995]: E0128 01:30:59.098066 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:30:59.147000 audit[6821]: USER_ACCT pid=6821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.156448 sshd[6821]: Accepted publickey for core from 10.0.0.1 port 60976 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:30:59.197405 sshd-session[6821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:59.154000 audit[6821]: CRED_ACQ pid=6821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.265277 systemd-logind[1590]: New session 28 of user core. 
Jan 28 01:30:59.290904 kernel: audit: type=1101 audit(1769563859.147:946): pid=6821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.291207 kernel: audit: type=1103 audit(1769563859.154:947): pid=6821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.291259 kernel: audit: type=1006 audit(1769563859.154:948): pid=6821 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 28 01:30:59.329357 kernel: audit: type=1300 audit(1769563859.154:948): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd899c1330 a2=3 a3=0 items=0 ppid=1 pid=6821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:59.154000 audit[6821]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd899c1330 a2=3 a3=0 items=0 ppid=1 pid=6821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:30:59.154000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:59.418403 kernel: audit: type=1327 audit(1769563859.154:948): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:30:59.427701 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 28 01:30:59.448000 audit[6821]: USER_START pid=6821 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.499130 kernel: audit: type=1105 audit(1769563859.448:949): pid=6821 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.499285 kernel: audit: type=1103 audit(1769563859.472:950): pid=6825 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.472000 audit[6825]: CRED_ACQ pid=6825 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.914994 sshd[6825]: Connection closed by 10.0.0.1 port 60976 Jan 28 01:30:59.916279 sshd-session[6821]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:59.920000 audit[6821]: USER_END pid=6821 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.954791 systemd-logind[1590]: Session 28 logged out. Waiting for processes to exit. 
Jan 28 01:30:59.960216 systemd[1]: sshd@26-10.0.0.61:22-10.0.0.1:60976.service: Deactivated successfully. Jan 28 01:30:59.977607 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 01:31:00.011757 kernel: audit: type=1106 audit(1769563859.920:951): pid=6821 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:00.015708 systemd-logind[1590]: Removed session 28. Jan 28 01:30:59.932000 audit[6821]: CRED_DISP pid=6821 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:00.125363 kernel: audit: type=1104 audit(1769563859.932:952): pid=6821 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:30:59.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.61:22-10.0.0.1:60976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:01.084429 kubelet[2995]: E0128 01:31:01.082411 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:31:03.072231 kubelet[2995]: E0128 01:31:03.067312 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:03.157437 kubelet[2995]: E0128 01:31:03.156695 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:31:04.253764 containerd[1624]: time="2026-01-28T01:31:04.253400462Z" level=info msg="container event discarded" 
container=0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26 type=CONTAINER_CREATED_EVENT Jan 28 01:31:04.253764 containerd[1624]: time="2026-01-28T01:31:04.253524152Z" level=info msg="container event discarded" container=0f719b9fcb9e7f8c8f2fb057b047795925d2c6daf747c7674175978f0b51ad26 type=CONTAINER_STARTED_EVENT Jan 28 01:31:04.918201 containerd[1624]: time="2026-01-28T01:31:04.915644582Z" level=info msg="container event discarded" container=d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3 type=CONTAINER_CREATED_EVENT Jan 28 01:31:04.918201 containerd[1624]: time="2026-01-28T01:31:04.915752483Z" level=info msg="container event discarded" container=d61b83c069d0ca040724fdd2deafbd45175756db965c3b7d10810901c2294ea3 type=CONTAINER_STARTED_EVENT Jan 28 01:31:04.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.61:22-10.0.0.1:46070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:04.992569 systemd[1]: Started sshd@27-10.0.0.61:22-10.0.0.1:46070.service - OpenSSH per-connection server daemon (10.0.0.1:46070). Jan 28 01:31:05.027311 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:31:05.027461 kernel: audit: type=1130 audit(1769563864.991:954): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.61:22-10.0.0.1:46070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:05.027512 containerd[1624]: time="2026-01-28T01:31:05.009579807Z" level=info msg="container event discarded" container=bbbd37ceea0cca7a69e61c68be3bbb761ff155afafe4bd14651d580d9aa40b60 type=CONTAINER_CREATED_EVENT Jan 28 01:31:05.080671 kubelet[2995]: E0128 01:31:05.080631 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:05.084685 kubelet[2995]: E0128 01:31:05.079922 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:31:05.401285 sshd[6841]: Accepted publickey for core from 10.0.0.1 port 46070 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:05.396000 audit[6841]: USER_ACCT pid=6841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:05.406471 sshd-session[6841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:05.521148 kernel: audit: type=1101 audit(1769563865.396:955): pid=6841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:05.397000 
audit[6841]: CRED_ACQ pid=6841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:05.560582 systemd-logind[1590]: New session 29 of user core.
Jan 28 01:31:05.584421 containerd[1624]: time="2026-01-28T01:31:05.572776651Z" level=info msg="container event discarded" container=e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd type=CONTAINER_CREATED_EVENT
Jan 28 01:31:05.584421 containerd[1624]: time="2026-01-28T01:31:05.572883610Z" level=info msg="container event discarded" container=e8192bab431bbf4b9c2dd1ddc13f125b56b641c34a345eb905a0b84840a468bd type=CONTAINER_STARTED_EVENT
Jan 28 01:31:05.688492 kernel: audit: type=1103 audit(1769563865.397:956): pid=6841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:05.688647 kernel: audit: type=1006 audit(1769563865.397:957): pid=6841 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Jan 28 01:31:05.688692 kernel: audit: type=1300 audit(1769563865.397:957): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeba3d95f0 a2=3 a3=0 items=0 ppid=1 pid=6841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:31:05.397000 audit[6841]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeba3d95f0 a2=3 a3=0 items=0 ppid=1 pid=6841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:31:05.800381 kernel: audit: type=1327 audit(1769563865.397:957): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:31:05.397000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:31:05.805638 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 28 01:31:05.884000 audit[6841]: USER_START pid=6841 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:05.894000 audit[6845]: CRED_ACQ pid=6845 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:05.996760 kernel: audit: type=1105 audit(1769563865.884:958): pid=6841 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:05.996993 kernel: audit: type=1103 audit(1769563865.894:959): pid=6845 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:06.476478 sshd[6845]: Connection closed by 10.0.0.1 port 46070
Jan 28 01:31:06.475699 sshd-session[6841]: pam_unix(sshd:session): session closed for user core
Jan 28 01:31:06.492000 audit[6841]: USER_END pid=6841 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:06.577176 kernel: audit: type=1106 audit(1769563866.492:960): pid=6841 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:06.585862 kernel: audit: type=1104 audit(1769563866.492:961): pid=6841 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:06.492000 audit[6841]: CRED_DISP pid=6841 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:06.588745 systemd[1]: sshd@27-10.0.0.61:22-10.0.0.1:46070.service: Deactivated successfully.
Jan 28 01:31:06.589718 systemd-logind[1590]: Session 29 logged out. Waiting for processes to exit.
Jan 28 01:31:06.607601 systemd[1]: session-29.scope: Deactivated successfully.
Jan 28 01:31:06.631654 systemd-logind[1590]: Removed session 29.
Jan 28 01:31:06.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.61:22-10.0.0.1:46070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:06.862334 containerd[1624]: time="2026-01-28T01:31:06.861582776Z" level=info msg="container event discarded" container=bbbd37ceea0cca7a69e61c68be3bbb761ff155afafe4bd14651d580d9aa40b60 type=CONTAINER_STARTED_EVENT
Jan 28 01:31:06.945630 containerd[1624]: time="2026-01-28T01:31:06.945466388Z" level=info msg="container event discarded" container=8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc type=CONTAINER_CREATED_EVENT
Jan 28 01:31:06.945992 containerd[1624]: time="2026-01-28T01:31:06.945860583Z" level=info msg="container event discarded" container=8771a1fa698fef08c85b0536df0e7b6d297b346eba435f19103e3ada53b7d7cc type=CONTAINER_STARTED_EVENT
Jan 28 01:31:06.974911 containerd[1624]: time="2026-01-28T01:31:06.974816034Z" level=info msg="container event discarded" container=c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f type=CONTAINER_CREATED_EVENT
Jan 28 01:31:06.976121 containerd[1624]: time="2026-01-28T01:31:06.975417113Z" level=info msg="container event discarded" container=c8ee8131455157ee4096aa4021503ed249c0c4325be0ab57a32a0a5a0cc9ca0f type=CONTAINER_STARTED_EVENT
Jan 28 01:31:07.105149 kubelet[2995]: E0128 01:31:07.104868 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2"
Jan 28 01:31:07.577465 containerd[1624]: time="2026-01-28T01:31:07.576614963Z" level=info msg="container event discarded" container=b90ef29831aae74d271640a76bc0ba32b6d4e5feabd32ab24fa254f40b3aab40 type=CONTAINER_CREATED_EVENT
Jan 28 01:31:08.767796 containerd[1624]: time="2026-01-28T01:31:08.767668364Z" level=info msg="container event discarded" container=f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587 type=CONTAINER_CREATED_EVENT
Jan 28 01:31:08.767796 containerd[1624]: time="2026-01-28T01:31:08.767747723Z" level=info msg="container event discarded" container=f35ce63a4120edcdd3ec234c8dbb4c7756999531c3a5cdc9d6eb6995fc141587 type=CONTAINER_STARTED_EVENT
Jan 28 01:31:09.032661 containerd[1624]: time="2026-01-28T01:31:09.032461888Z" level=info msg="container event discarded" container=b90ef29831aae74d271640a76bc0ba32b6d4e5feabd32ab24fa254f40b3aab40 type=CONTAINER_STARTED_EVENT
Jan 28 01:31:09.096254 kubelet[2995]: E0128 01:31:09.096158 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3"
Jan 28 01:31:09.968828 containerd[1624]: time="2026-01-28T01:31:09.968270016Z" level=info msg="container event discarded" container=18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb type=CONTAINER_CREATED_EVENT
Jan 28 01:31:09.968828 containerd[1624]: time="2026-01-28T01:31:09.968354033Z" level=info msg="container event discarded" container=18ddfc6fcb958d6afb2889f5b2e12420fa2d3893c6162a2d28b4ad9cecb72ccb type=CONTAINER_STARTED_EVENT
Jan 28 01:31:11.548624 systemd[1]: Started sshd@28-10.0.0.61:22-10.0.0.1:46104.service - OpenSSH per-connection server daemon (10.0.0.1:46104).
Jan 28 01:31:11.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.61:22-10.0.0.1:46104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:11.582141 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:31:11.582294 kernel: audit: type=1130 audit(1769563871.548:963): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.61:22-10.0.0.1:46104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:12.093912 sshd[6860]: Accepted publickey for core from 10.0.0.1 port 46104 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E
Jan 28 01:31:12.086000 audit[6860]: USER_ACCT pid=6860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:12.223645 sshd-session[6860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:31:12.165000 audit[6860]: CRED_ACQ pid=6860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:12.299823 systemd-logind[1590]: New session 30 of user core.
Jan 28 01:31:12.390632 kernel: audit: type=1101 audit(1769563872.086:964): pid=6860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:12.390797 kernel: audit: type=1103 audit(1769563872.165:965): pid=6860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:12.411195 kernel: audit: type=1006 audit(1769563872.194:966): pid=6860 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Jan 28 01:31:12.443108 kernel: audit: type=1300 audit(1769563872.194:966): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdcc46e7e0 a2=3 a3=0 items=0 ppid=1 pid=6860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:31:12.194000 audit[6860]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdcc46e7e0 a2=3 a3=0 items=0 ppid=1 pid=6860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:31:12.553282 kernel: audit: type=1327 audit(1769563872.194:966): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:31:12.194000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:31:12.549468 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 28 01:31:12.615000 audit[6860]: USER_START pid=6860 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:12.675000 audit[6864]: CRED_ACQ pid=6864 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:12.763534 kernel: audit: type=1105 audit(1769563872.615:967): pid=6860 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:12.763596 kernel: audit: type=1103 audit(1769563872.675:968): pid=6864 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:13.106331 kubelet[2995]: E0128 01:31:13.102695 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0"
Jan 28 01:31:13.847220 sshd[6864]: Connection closed by 10.0.0.1 port 46104
Jan 28 01:31:13.891559 sshd-session[6860]: pam_unix(sshd:session): session closed for user core
Jan 28 01:31:13.940000 audit[6860]: USER_END pid=6860 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:13.989852 systemd[1]: sshd@28-10.0.0.61:22-10.0.0.1:46104.service: Deactivated successfully.
Jan 28 01:31:14.027738 systemd[1]: session-30.scope: Deactivated successfully.
Jan 28 01:31:14.062641 systemd-logind[1590]: Session 30 logged out. Waiting for processes to exit.
Jan 28 01:31:14.166458 kubelet[2995]: E0128 01:31:14.151815 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe"
Jan 28 01:31:14.255635 kernel: audit: type=1106 audit(1769563873.940:969): pid=6860 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:14.255717 kernel: audit: type=1104 audit(1769563873.950:970): pid=6860 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:13.950000 audit[6860]: CRED_DISP pid=6860 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:14.295583 systemd-logind[1590]: Removed session 30.
Jan 28 01:31:13.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.61:22-10.0.0.1:46104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:16.115657 kubelet[2995]: E0128 01:31:16.111718 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55"
Jan 28 01:31:18.122196 kubelet[2995]: E0128 01:31:18.120217 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844"
Jan 28 01:31:18.927785 systemd[1]: Started sshd@29-10.0.0.61:22-10.0.0.1:60508.service - OpenSSH per-connection server daemon (10.0.0.1:60508).
Jan 28 01:31:19.041407 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:31:19.041592 kernel: audit: type=1130 audit(1769563878.929:972): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.61:22-10.0.0.1:60508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:18.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.61:22-10.0.0.1:60508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:19.447000 audit[6880]: USER_ACCT pid=6880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:19.478718 sshd[6880]: Accepted publickey for core from 10.0.0.1 port 60508 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E
Jan 28 01:31:19.512391 sshd-session[6880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:31:19.555223 kernel: audit: type=1101 audit(1769563879.447:973): pid=6880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:19.555374 kernel: audit: type=1103 audit(1769563879.481:974): pid=6880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:19.481000 audit[6880]: CRED_ACQ pid=6880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:19.630210 kernel: audit: type=1006 audit(1769563879.493:975): pid=6880 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1
Jan 28 01:31:19.493000 audit[6880]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc09337f90 a2=3 a3=0 items=0 ppid=1 pid=6880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:31:19.693466 systemd-logind[1590]: New session 31 of user core.
Jan 28 01:31:19.725587 kernel: audit: type=1300 audit(1769563879.493:975): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc09337f90 a2=3 a3=0 items=0 ppid=1 pid=6880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:31:19.493000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:31:19.766617 kernel: audit: type=1327 audit(1769563879.493:975): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:31:19.770363 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 28 01:31:19.817000 audit[6880]: USER_START pid=6880 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:20.108273 kernel: audit: type=1105 audit(1769563879.817:976): pid=6880 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:20.108441 kernel: audit: type=1103 audit(1769563879.832:977): pid=6895 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:19.832000 audit[6895]: CRED_ACQ pid=6895 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:20.108607 kubelet[2995]: E0128 01:31:20.106737 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3"
Jan 28 01:31:20.151347 kubelet[2995]: E0128 01:31:20.124102 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2"
Jan 28 01:31:21.054278 sshd[6895]: Connection closed by 10.0.0.1 port 60508
Jan 28 01:31:21.058633 sshd-session[6880]: pam_unix(sshd:session): session closed for user core
Jan 28 01:31:21.169191 kernel: audit: type=1106 audit(1769563881.085:978): pid=6880 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:21.085000 audit[6880]: USER_END pid=6880 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:21.169500 kubelet[2995]: E0128 01:31:21.093834 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:21.085000 audit[6880]: CRED_DISP pid=6880 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:21.192678 systemd[1]: sshd@29-10.0.0.61:22-10.0.0.1:60508.service: Deactivated successfully.
Jan 28 01:31:21.214457 kernel: audit: type=1104 audit(1769563881.085:979): pid=6880 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:21.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.61:22-10.0.0.1:60508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:21.218361 systemd[1]: session-31.scope: Deactivated successfully.
Jan 28 01:31:21.255559 systemd-logind[1590]: Session 31 logged out. Waiting for processes to exit.
Jan 28 01:31:21.268567 systemd-logind[1590]: Removed session 31.
Jan 28 01:31:26.160477 systemd[1]: Started sshd@30-10.0.0.61:22-10.0.0.1:56928.service - OpenSSH per-connection server daemon (10.0.0.1:56928).
Jan 28 01:31:26.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.61:22-10.0.0.1:56928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:26.190824 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:31:26.191375 kernel: audit: type=1130 audit(1769563886.159:981): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.61:22-10.0.0.1:56928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:26.806000 audit[6923]: USER_ACCT pid=6923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:26.843497 sshd[6923]: Accepted publickey for core from 10.0.0.1 port 56928 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E
Jan 28 01:31:26.849636 sshd-session[6923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:31:26.930308 kernel: audit: type=1101 audit(1769563886.806:982): pid=6923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:26.930400 kernel: audit: type=1103 audit(1769563886.836:983): pid=6923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:26.836000 audit[6923]: CRED_ACQ pid=6923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:26.945263 systemd-logind[1590]: New session 32 of user core.
Jan 28 01:31:26.968249 kernel: audit: type=1006 audit(1769563886.836:984): pid=6923 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1
Jan 28 01:31:27.069180 kernel: audit: type=1300 audit(1769563886.836:984): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef232c5e0 a2=3 a3=0 items=0 ppid=1 pid=6923 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:31:26.836000 audit[6923]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef232c5e0 a2=3 a3=0 items=0 ppid=1 pid=6923 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:31:27.081189 kernel: audit: type=1327 audit(1769563886.836:984): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:31:26.836000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:31:27.085416 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 28 01:31:27.122586 kubelet[2995]: E0128 01:31:27.122545 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:27.126886 kubelet[2995]: E0128 01:31:27.126827 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe"
Jan 28 01:31:27.129000 audit[6923]: USER_START pid=6923 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:27.212301 kernel: audit: type=1105 audit(1769563887.129:985): pid=6923 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:27.140000 audit[6927]: CRED_ACQ pid=6927 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:27.266209 kernel: audit: type=1103 audit(1769563887.140:986): pid=6927 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:28.082315 sshd[6927]: Connection closed by 10.0.0.1 port 56928
Jan 28 01:31:28.090923 sshd-session[6923]: pam_unix(sshd:session): session closed for user core
Jan 28 01:31:28.112000 audit[6923]: USER_END pid=6923 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:28.121358 systemd[1]: sshd@30-10.0.0.61:22-10.0.0.1:56928.service: Deactivated successfully.
Jan 28 01:31:28.123336 systemd-logind[1590]: Session 32 logged out. Waiting for processes to exit.
Jan 28 01:31:28.153360 systemd[1]: session-32.scope: Deactivated successfully.
Jan 28 01:31:28.160616 systemd-logind[1590]: Removed session 32.
Jan 28 01:31:28.166066 kernel: audit: type=1106 audit(1769563888.112:987): pid=6923 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:28.112000 audit[6923]: CRED_DISP pid=6923 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:28.175169 kubelet[2995]: E0128 01:31:28.173056 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0"
Jan 28 01:31:28.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.61:22-10.0.0.1:56928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:31:28.208920 kernel: audit: type=1104 audit(1769563888.112:988): pid=6923 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:31:30.065677 kubelet[2995]: E0128 01:31:30.065579 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:30.088718 kubelet[2995]: E0128 01:31:30.088480 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55"
Jan 28 01:31:31.076857 kubelet[2995]: E0128 01:31:31.074146 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]"
pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:31:32.072214 kubelet[2995]: E0128 01:31:32.064502 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:32.095271 kubelet[2995]: E0128 01:31:32.094879 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:31:33.143275 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:31:33.143434 kernel: audit: type=1130 audit(1769563893.131:990): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.61:22-10.0.0.1:39524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:33.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.61:22-10.0.0.1:39524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:33.132061 systemd[1]: Started sshd@31-10.0.0.61:22-10.0.0.1:39524.service - OpenSSH per-connection server daemon (10.0.0.1:39524). Jan 28 01:31:33.476000 audit[6943]: USER_ACCT pid=6943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.516892 kernel: audit: type=1101 audit(1769563893.476:991): pid=6943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.502890 sshd-session[6943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:33.518906 sshd[6943]: Accepted publickey for core from 10.0.0.1 port 39524 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:33.484000 audit[6943]: CRED_ACQ pid=6943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.530116 systemd-logind[1590]: New session 33 of user core. 
Jan 28 01:31:33.551215 kernel: audit: type=1103 audit(1769563893.484:992): pid=6943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.551375 kernel: audit: type=1006 audit(1769563893.484:993): pid=6943 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 28 01:31:33.484000 audit[6943]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe74f71910 a2=3 a3=0 items=0 ppid=1 pid=6943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:33.553699 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 28 01:31:33.572097 kernel: audit: type=1300 audit(1769563893.484:993): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe74f71910 a2=3 a3=0 items=0 ppid=1 pid=6943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:33.572250 kernel: audit: type=1327 audit(1769563893.484:993): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:33.484000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:33.569000 audit[6943]: USER_START pid=6943 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.574000 audit[6947]: CRED_ACQ pid=6947 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.617715 kernel: audit: type=1105 audit(1769563893.569:994): pid=6943 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.618197 kernel: audit: type=1103 audit(1769563893.574:995): pid=6947 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.945213 sshd[6947]: Connection closed by 10.0.0.1 port 39524 Jan 28 01:31:33.949377 sshd-session[6943]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:33.950000 audit[6943]: USER_END pid=6943 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.962210 systemd[1]: sshd@31-10.0.0.61:22-10.0.0.1:39524.service: Deactivated successfully. Jan 28 01:31:33.967607 systemd[1]: session-33.scope: Deactivated successfully. 
Jan 28 01:31:33.968074 kernel: audit: type=1106 audit(1769563893.950:996): pid=6943 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.968904 kernel: audit: type=1104 audit(1769563893.950:997): pid=6943 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.950000 audit[6943]: CRED_DISP pid=6943 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:33.974921 systemd-logind[1590]: Session 33 logged out. Waiting for processes to exit. Jan 28 01:31:33.980104 systemd-logind[1590]: Removed session 33. Jan 28 01:31:33.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.61:22-10.0.0.1:39524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:35.094385 kubelet[2995]: E0128 01:31:35.092617 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:31:39.036381 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:31:39.036533 kernel: audit: type=1130 audit(1769563899.016:999): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.61:22-10.0.0.1:39576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:39.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.61:22-10.0.0.1:39576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:39.021478 systemd[1]: Started sshd@32-10.0.0.61:22-10.0.0.1:39576.service - OpenSSH per-connection server daemon (10.0.0.1:39576). 
Jan 28 01:31:39.319201 sshd[6970]: Accepted publickey for core from 10.0.0.1 port 39576 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:39.315000 audit[6970]: USER_ACCT pid=6970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.325858 sshd-session[6970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:39.367740 kernel: audit: type=1101 audit(1769563899.315:1000): pid=6970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.367876 kernel: audit: type=1103 audit(1769563899.317:1001): pid=6970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.317000 audit[6970]: CRED_ACQ pid=6970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.366669 systemd-logind[1590]: New session 34 of user core. 
Jan 28 01:31:39.317000 audit[6970]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca97daab0 a2=3 a3=0 items=0 ppid=1 pid=6970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:39.411700 kernel: audit: type=1006 audit(1769563899.317:1002): pid=6970 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 28 01:31:39.411856 kernel: audit: type=1300 audit(1769563899.317:1002): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca97daab0 a2=3 a3=0 items=0 ppid=1 pid=6970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:39.411919 kernel: audit: type=1327 audit(1769563899.317:1002): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:39.317000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:39.424561 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 28 01:31:39.434000 audit[6970]: USER_START pid=6970 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.440000 audit[6974]: CRED_ACQ pid=6974 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.474272 kernel: audit: type=1105 audit(1769563899.434:1003): pid=6970 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.474396 kernel: audit: type=1103 audit(1769563899.440:1004): pid=6974 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.638061 sshd[6974]: Connection closed by 10.0.0.1 port 39576 Jan 28 01:31:39.644503 sshd-session[6970]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:39.652000 audit[6970]: USER_END pid=6970 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.665608 systemd-logind[1590]: Session 34 logged out. Waiting for processes to exit. 
Jan 28 01:31:39.671192 kernel: audit: type=1106 audit(1769563899.652:1005): pid=6970 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.665631 systemd[1]: sshd@32-10.0.0.61:22-10.0.0.1:39576.service: Deactivated successfully. Jan 28 01:31:39.652000 audit[6970]: CRED_DISP pid=6970 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.673691 systemd[1]: session-34.scope: Deactivated successfully. Jan 28 01:31:39.682161 kernel: audit: type=1104 audit(1769563899.652:1006): pid=6970 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:39.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.61:22-10.0.0.1:39576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:39.682253 systemd-logind[1590]: Removed session 34. 
Jan 28 01:31:40.074822 kubelet[2995]: E0128 01:31:40.067240 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:31:42.136085 kubelet[2995]: E0128 01:31:42.129745 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:31:43.086640 containerd[1624]: time="2026-01-28T01:31:43.085745783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:31:43.203884 containerd[1624]: time="2026-01-28T01:31:43.203125025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:31:43.211317 containerd[1624]: time="2026-01-28T01:31:43.210077147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:31:43.211317 containerd[1624]: time="2026-01-28T01:31:43.210199726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: 
active requests=0, bytes read=0" Jan 28 01:31:43.211513 kubelet[2995]: E0128 01:31:43.210412 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:31:43.211513 kubelet[2995]: E0128 01:31:43.210491 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:31:43.211513 kubelet[2995]: E0128 01:31:43.210693 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwzkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:31:43.212630 kubelet[2995]: E0128 01:31:43.212558 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:31:44.070301 containerd[1624]: time="2026-01-28T01:31:44.070181384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:31:44.164196 containerd[1624]: time="2026-01-28T01:31:44.163804832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:31:44.170570 containerd[1624]: time="2026-01-28T01:31:44.168544441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:31:44.170570 containerd[1624]: time="2026-01-28T01:31:44.168741869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:31:44.170570 containerd[1624]: time="2026-01-28T01:31:44.170131086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:31:44.170779 kubelet[2995]: E0128 01:31:44.169126 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:31:44.170779 kubelet[2995]: E0128 01:31:44.169193 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:31:44.170779 kubelet[2995]: E0128 01:31:44.169435 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4880c63e305842ec869c2f1042d40e46,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:31:44.447945 containerd[1624]: time="2026-01-28T01:31:44.447420094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:31:44.459293 containerd[1624]: time="2026-01-28T01:31:44.456392577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:31:44.459293 containerd[1624]: time="2026-01-28T01:31:44.456539671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:31:44.459682 kubelet[2995]: E0128 01:31:44.457951 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:31:44.459682 kubelet[2995]: E0128 01:31:44.458171 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:31:44.468485 kubelet[2995]: E0128 01:31:44.461477 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:31:44.468778 containerd[1624]: time="2026-01-28T01:31:44.460849709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:31:44.669254 containerd[1624]: time="2026-01-28T01:31:44.667144233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:31:44.684766 containerd[1624]: time="2026-01-28T01:31:44.674321302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:31:44.684766 containerd[1624]: time="2026-01-28T01:31:44.675613808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:31:44.698376 kubelet[2995]: E0128 01:31:44.697826 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:31:44.715190 kubelet[2995]: E0128 01:31:44.706244 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:31:44.715190 kubelet[2995]: E0128 01:31:44.710963 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:31:44.715190 kubelet[2995]: E0128 01:31:44.713112 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:31:44.715824 containerd[1624]: time="2026-01-28T01:31:44.713715463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:31:45.383729 systemd[1]: Started sshd@33-10.0.0.61:22-10.0.0.1:55858.service - OpenSSH per-connection server daemon (10.0.0.1:55858). Jan 28 01:31:45.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.61:22-10.0.0.1:55858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:45.393249 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:31:45.393328 kernel: audit: type=1130 audit(1769563905.382:1008): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.61:22-10.0.0.1:55858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:45.909229 containerd[1624]: time="2026-01-28T01:31:45.902431082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:31:45.921879 containerd[1624]: time="2026-01-28T01:31:45.918293288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:31:45.921879 containerd[1624]: time="2026-01-28T01:31:45.918447154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:31:45.969134 kubelet[2995]: E0128 01:31:45.936458 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:31:45.969134 kubelet[2995]: E0128 01:31:45.936589 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:31:45.969134 kubelet[2995]: E0128 01:31:45.937686 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:31:45.969134 kubelet[2995]: E0128 01:31:45.941837 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:31:46.578000 audit[6988]: USER_ACCT pid=6988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:46.602091 sshd[6988]: Accepted publickey for core from 10.0.0.1 port 55858 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:46.628177 sshd-session[6988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:46.677303 systemd-logind[1590]: New session 35 of user core. 
Jan 28 01:31:46.617000 audit[6988]: CRED_ACQ pid=6988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:46.727517 kernel: audit: type=1101 audit(1769563906.578:1009): pid=6988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:46.727697 kernel: audit: type=1103 audit(1769563906.617:1010): pid=6988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:46.731961 kernel: audit: type=1006 audit(1769563906.617:1011): pid=6988 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 28 01:31:46.617000 audit[6988]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8d64ea50 a2=3 a3=0 items=0 ppid=1 pid=6988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:46.874125 kernel: audit: type=1300 audit(1769563906.617:1011): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8d64ea50 a2=3 a3=0 items=0 ppid=1 pid=6988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:46.617000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:46.885237 kernel: audit: type=1327 audit(1769563906.617:1011): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:46.882850 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 28 01:31:46.912000 audit[6988]: USER_START pid=6988 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:46.930000 audit[6992]: CRED_ACQ pid=6992 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:47.061130 kernel: audit: type=1105 audit(1769563906.912:1012): pid=6988 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:47.061276 kernel: audit: type=1103 audit(1769563906.930:1013): pid=6992 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:47.561376 sshd[6992]: Connection closed by 10.0.0.1 port 55858 Jan 28 01:31:47.564430 sshd-session[6988]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:47.574000 audit[6988]: USER_END pid=6988 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 01:31:47.574000 audit[6988]: CRED_DISP pid=6988 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:47.636073 kernel: audit: type=1106 audit(1769563907.574:1014): pid=6988 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:47.636214 kernel: audit: type=1104 audit(1769563907.574:1015): pid=6988 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:47.642485 systemd[1]: sshd@33-10.0.0.61:22-10.0.0.1:55858.service: Deactivated successfully. Jan 28 01:31:47.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.61:22-10.0.0.1:55858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:47.672765 systemd[1]: session-35.scope: Deactivated successfully. Jan 28 01:31:47.688166 systemd-logind[1590]: Session 35 logged out. Waiting for processes to exit. Jan 28 01:31:47.735414 systemd[1]: Started sshd@34-10.0.0.61:22-10.0.0.1:55866.service - OpenSSH per-connection server daemon (10.0.0.1:55866). Jan 28 01:31:47.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.61:22-10.0.0.1:55866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:47.746219 systemd-logind[1590]: Removed session 35. 
Jan 28 01:31:48.120301 sshd[7006]: Accepted publickey for core from 10.0.0.1 port 55866 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:48.119482 sshd-session[7006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:48.113000 audit[7006]: USER_ACCT pid=7006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:48.116000 audit[7006]: CRED_ACQ pid=7006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:48.116000 audit[7006]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe690ec740 a2=3 a3=0 items=0 ppid=1 pid=7006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:48.116000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:48.201191 systemd-logind[1590]: New session 36 of user core. Jan 28 01:31:48.218433 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 28 01:31:48.227000 audit[7006]: USER_START pid=7006 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:48.254000 audit[7010]: CRED_ACQ pid=7010 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:48.892109 sshd[7010]: Connection closed by 10.0.0.1 port 55866 Jan 28 01:31:48.891345 sshd-session[7006]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:48.891000 audit[7006]: USER_END pid=7006 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:48.891000 audit[7006]: CRED_DISP pid=7006 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:48.953772 systemd[1]: sshd@34-10.0.0.61:22-10.0.0.1:55866.service: Deactivated successfully. Jan 28 01:31:48.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.61:22-10.0.0.1:55866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:48.982565 systemd[1]: session-36.scope: Deactivated successfully. Jan 28 01:31:48.995501 systemd-logind[1590]: Session 36 logged out. Waiting for processes to exit. 
Jan 28 01:31:49.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.61:22-10.0.0.1:55894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:49.013747 systemd[1]: Started sshd@35-10.0.0.61:22-10.0.0.1:55894.service - OpenSSH per-connection server daemon (10.0.0.1:55894). Jan 28 01:31:49.029562 systemd-logind[1590]: Removed session 36. Jan 28 01:31:49.076725 kubelet[2995]: E0128 01:31:49.070666 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:31:49.352633 sshd[7021]: Accepted publickey for core from 10.0.0.1 port 55894 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:49.345000 audit[7021]: USER_ACCT pid=7021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:49.354000 audit[7021]: CRED_ACQ pid=7021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:49.354000 audit[7021]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd77f41350 a2=3 a3=0 items=0 ppid=1 pid=7021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:49.354000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:49.362857 sshd-session[7021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:49.398874 systemd-logind[1590]: New session 37 of user core. Jan 28 01:31:49.424432 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 28 01:31:49.452000 audit[7021]: USER_START pid=7021 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:49.458000 audit[7025]: CRED_ACQ pid=7025 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:50.561519 sshd[7025]: Connection closed by 10.0.0.1 port 55894 Jan 28 01:31:50.563261 sshd-session[7021]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:50.577000 audit[7021]: USER_END pid=7021 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:50.674733 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 28 01:31:50.674944 kernel: audit: type=1106 audit(1769563910.577:1032): pid=7021 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:50.580000 audit[7021]: CRED_DISP pid=7021 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:50.690571 systemd-logind[1590]: Session 37 logged out. Waiting for processes to exit. Jan 28 01:31:50.693509 systemd[1]: sshd@35-10.0.0.61:22-10.0.0.1:55894.service: Deactivated successfully. Jan 28 01:31:50.717262 kernel: audit: type=1104 audit(1769563910.580:1033): pid=7021 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:50.700778 systemd[1]: session-37.scope: Deactivated successfully. Jan 28 01:31:50.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.61:22-10.0.0.1:55894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:50.753701 kernel: audit: type=1131 audit(1769563910.694:1034): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.61:22-10.0.0.1:55894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:50.760958 systemd-logind[1590]: Removed session 37. 
Jan 28 01:31:53.075304 containerd[1624]: time="2026-01-28T01:31:53.075195003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:31:53.164356 containerd[1624]: time="2026-01-28T01:31:53.164240785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:31:53.169934 containerd[1624]: time="2026-01-28T01:31:53.168462890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:31:53.169934 containerd[1624]: time="2026-01-28T01:31:53.168587872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:31:53.170254 kubelet[2995]: E0128 01:31:53.168812 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:31:53.170254 kubelet[2995]: E0128 01:31:53.168955 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:31:53.170254 kubelet[2995]: E0128 01:31:53.169231 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptffd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:31:53.171246 kubelet[2995]: E0128 01:31:53.171186 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:31:55.067131 kubelet[2995]: E0128 01:31:55.064781 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:55.072128 kubelet[2995]: E0128 01:31:55.069363 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:31:55.621835 systemd[1]: Started sshd@36-10.0.0.61:22-10.0.0.1:57700.service - OpenSSH per-connection server daemon (10.0.0.1:57700). Jan 28 01:31:55.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.61:22-10.0.0.1:57700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:55.684306 kernel: audit: type=1130 audit(1769563915.621:1035): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.61:22-10.0.0.1:57700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:31:56.079000 audit[7066]: USER_ACCT pid=7066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.095678 sshd[7066]: Accepted publickey for core from 10.0.0.1 port 57700 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:31:56.108632 sshd-session[7066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:56.143708 systemd-logind[1590]: New session 38 of user core. 
Jan 28 01:31:56.092000 audit[7066]: CRED_ACQ pid=7066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.181063 kernel: audit: type=1101 audit(1769563916.079:1036): pid=7066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.181198 kernel: audit: type=1103 audit(1769563916.092:1037): pid=7066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.092000 audit[7066]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4b6c0f10 a2=3 a3=0 items=0 ppid=1 pid=7066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:56.197619 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 28 01:31:56.225720 kernel: audit: type=1006 audit(1769563916.092:1038): pid=7066 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 28 01:31:56.225860 kernel: audit: type=1300 audit(1769563916.092:1038): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4b6c0f10 a2=3 a3=0 items=0 ppid=1 pid=7066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:31:56.092000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:56.245095 kernel: audit: type=1327 audit(1769563916.092:1038): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:31:56.230000 audit[7066]: USER_START pid=7066 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.249000 audit[7070]: CRED_ACQ pid=7070 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.333316 kernel: audit: type=1105 audit(1769563916.230:1039): pid=7066 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.333465 kernel: audit: type=1103 audit(1769563916.249:1040): pid=7070 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.556243 sshd[7070]: Connection closed by 10.0.0.1 port 57700 Jan 28 01:31:56.555635 sshd-session[7066]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:56.561000 audit[7066]: USER_END pid=7066 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.580344 systemd[1]: sshd@36-10.0.0.61:22-10.0.0.1:57700.service: Deactivated successfully. Jan 28 01:31:56.616708 systemd[1]: session-38.scope: Deactivated successfully. Jan 28 01:31:56.617440 systemd-logind[1590]: Session 38 logged out. Waiting for processes to exit. Jan 28 01:31:56.645824 systemd-logind[1590]: Removed session 38. 
Jan 28 01:31:56.561000 audit[7066]: CRED_DISP pid=7066 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.714818 kernel: audit: type=1106 audit(1769563916.561:1041): pid=7066 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.715134 kernel: audit: type=1104 audit(1769563916.561:1042): pid=7066 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:31:56.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.61:22-10.0.0.1:57700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:31:57.085759 kubelet[2995]: E0128 01:31:57.085436 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:31:58.080873 kubelet[2995]: E0128 01:31:58.067849 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:31:58.080873 kubelet[2995]: E0128 01:31:58.079748 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:32:01.073082 containerd[1624]: time="2026-01-28T01:32:01.072349965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:32:01.187139 containerd[1624]: time="2026-01-28T01:32:01.175738420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:32:01.187139 containerd[1624]: time="2026-01-28T01:32:01.180133015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:32:01.187139 containerd[1624]: time="2026-01-28T01:32:01.180255354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:32:01.187422 kubelet[2995]: E0128 01:32:01.184935 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:32:01.187422 kubelet[2995]: E0128 01:32:01.185478 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:32:01.191685 kubelet[2995]: E0128 01:32:01.190144 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prlsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:32:01.199242 kubelet[2995]: E0128 01:32:01.193108 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:32:01.677151 systemd[1]: Started sshd@37-10.0.0.61:22-10.0.0.1:57772.service - OpenSSH per-connection server daemon (10.0.0.1:57772). 
Jan 28 01:32:01.712867 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:32:01.713203 kernel: audit: type=1130 audit(1769563921.674:1044): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.61:22-10.0.0.1:57772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:01.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.61:22-10.0.0.1:57772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:02.209944 sshd[7085]: Accepted publickey for core from 10.0.0.1 port 57772 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:32:02.199000 audit[7085]: USER_ACCT pid=7085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:02.216699 sshd-session[7085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:32:02.214000 audit[7085]: CRED_ACQ pid=7085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:02.332422 systemd-logind[1590]: New session 39 of user core. 
Jan 28 01:32:02.397180 kernel: audit: type=1101 audit(1769563922.199:1045): pid=7085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:02.397330 kernel: audit: type=1103 audit(1769563922.214:1046): pid=7085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:02.397359 kernel: audit: type=1006 audit(1769563922.214:1047): pid=7085 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1 Jan 28 01:32:02.427261 kernel: audit: type=1300 audit(1769563922.214:1047): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9f8c5830 a2=3 a3=0 items=0 ppid=1 pid=7085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:02.214000 audit[7085]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9f8c5830 a2=3 a3=0 items=0 ppid=1 pid=7085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:02.214000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:02.527449 kernel: audit: type=1327 audit(1769563922.214:1047): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:02.529621 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jan 28 01:32:02.571000 audit[7085]: USER_START pid=7085 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:02.708251 kernel: audit: type=1105 audit(1769563922.571:1048): pid=7085 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:02.708385 kernel: audit: type=1103 audit(1769563922.583:1049): pid=7089 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:02.583000 audit[7089]: CRED_ACQ pid=7089 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:03.999106 sshd[7089]: Connection closed by 10.0.0.1 port 57772 Jan 28 01:32:04.010000 audit[7085]: USER_END pid=7085 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:04.006319 sshd-session[7085]: pam_unix(sshd:session): session closed for user core Jan 28 01:32:04.093518 systemd[1]: sshd@37-10.0.0.61:22-10.0.0.1:57772.service: Deactivated successfully. 
Jan 28 01:32:04.256945 kernel: audit: type=1106 audit(1769563924.010:1050): pid=7085 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:04.257237 kernel: audit: type=1104 audit(1769563924.010:1051): pid=7085 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:04.010000 audit[7085]: CRED_DISP pid=7085 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:04.118822 systemd[1]: session-39.scope: Deactivated successfully. Jan 28 01:32:04.255795 systemd-logind[1590]: Session 39 logged out. Waiting for processes to exit. Jan 28 01:32:04.286443 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 28 01:32:04.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.61:22-10.0.0.1:57772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:04.313470 systemd-logind[1590]: Removed session 39. Jan 28 01:32:04.803846 systemd-tmpfiles[7103]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 01:32:04.803869 systemd-tmpfiles[7103]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 01:32:04.804820 systemd-tmpfiles[7103]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 28 01:32:04.819775 systemd-tmpfiles[7103]: ACLs are not supported, ignoring. Jan 28 01:32:04.820096 systemd-tmpfiles[7103]: ACLs are not supported, ignoring. Jan 28 01:32:04.860454 systemd-tmpfiles[7103]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:32:04.860475 systemd-tmpfiles[7103]: Skipping /boot Jan 28 01:32:04.921547 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 28 01:32:04.924412 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 28 01:32:04.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:04.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:32:05.103767 kubelet[2995]: E0128 01:32:05.102590 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:32:07.144718 containerd[1624]: time="2026-01-28T01:32:07.135281654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:32:07.293725 containerd[1624]: time="2026-01-28T01:32:07.293245582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:32:07.309123 containerd[1624]: time="2026-01-28T01:32:07.303438772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:32:07.309123 containerd[1624]: time="2026-01-28T01:32:07.303603228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:32:07.309325 kubelet[2995]: E0128 01:32:07.305763 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:32:07.309325 kubelet[2995]: E0128 01:32:07.307193 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:32:07.309325 kubelet[2995]: E0128 01:32:07.308288 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfmt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:32:07.310566 kubelet[2995]: E0128 01:32:07.310440 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:32:09.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.61:22-10.0.0.1:39764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:32:09.096629 systemd[1]: Started sshd@38-10.0.0.61:22-10.0.0.1:39764.service - OpenSSH per-connection server daemon (10.0.0.1:39764). Jan 28 01:32:09.131168 kernel: kauditd_printk_skb: 3 callbacks suppressed Jan 28 01:32:09.131315 kernel: audit: type=1130 audit(1769563929.096:1055): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.61:22-10.0.0.1:39764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:09.146422 kubelet[2995]: E0128 01:32:09.146363 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:32:09.425000 audit[7107]: USER_ACCT pid=7107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:09.447119 sshd[7107]: Accepted publickey for core from 10.0.0.1 port 39764 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:32:09.449494 sshd-session[7107]: pam_unix(sshd:session): session opened for 
user core(uid=500) by core(uid=0) Jan 28 01:32:09.446000 audit[7107]: CRED_ACQ pid=7107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:09.494976 systemd-logind[1590]: New session 40 of user core. Jan 28 01:32:09.517313 kernel: audit: type=1101 audit(1769563929.425:1056): pid=7107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:09.517452 kernel: audit: type=1103 audit(1769563929.446:1057): pid=7107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:09.542135 kernel: audit: type=1006 audit(1769563929.446:1058): pid=7107 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1 Jan 28 01:32:09.542282 kernel: audit: type=1300 audit(1769563929.446:1058): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff29d8ae20 a2=3 a3=0 items=0 ppid=1 pid=7107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:09.446000 audit[7107]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff29d8ae20 a2=3 a3=0 items=0 ppid=1 pid=7107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:09.553359 systemd[1]: Started session-40.scope - Session 40 of User 
core. Jan 28 01:32:09.640419 kernel: audit: type=1327 audit(1769563929.446:1058): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:09.446000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:09.621000 audit[7107]: USER_START pid=7107 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:09.777327 kernel: audit: type=1105 audit(1769563929.621:1059): pid=7107 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:09.777441 kernel: audit: type=1103 audit(1769563929.635:1060): pid=7111 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:09.635000 audit[7111]: CRED_ACQ pid=7111 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:10.174405 sshd[7111]: Connection closed by 10.0.0.1 port 39764 Jan 28 01:32:10.179146 sshd-session[7107]: pam_unix(sshd:session): session closed for user core Jan 28 01:32:10.184000 audit[7107]: USER_END pid=7107 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:10.210477 systemd[1]: sshd@38-10.0.0.61:22-10.0.0.1:39764.service: Deactivated successfully. Jan 28 01:32:10.246113 systemd[1]: session-40.scope: Deactivated successfully. Jan 28 01:32:10.324201 kernel: audit: type=1106 audit(1769563930.184:1061): pid=7107 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:10.325346 kernel: audit: type=1104 audit(1769563930.184:1062): pid=7107 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:10.184000 audit[7107]: CRED_DISP pid=7107 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:10.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.61:22-10.0.0.1:39764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:10.282775 systemd-logind[1590]: Session 40 logged out. Waiting for processes to exit. Jan 28 01:32:10.303096 systemd-logind[1590]: Removed session 40. 
Jan 28 01:32:11.102584 kubelet[2995]: E0128 01:32:11.097463 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:32:12.077146 kubelet[2995]: E0128 01:32:12.076573 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:32:13.076495 kubelet[2995]: E0128 01:32:13.070818 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:14.069551 kubelet[2995]: E0128 01:32:14.069149 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:32:15.236074 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:32:15.236214 kernel: audit: type=1130 audit(1769563935.228:1064): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.61:22-10.0.0.1:49400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:15.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.61:22-10.0.0.1:49400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:15.229219 systemd[1]: Started sshd@39-10.0.0.61:22-10.0.0.1:49400.service - OpenSSH per-connection server daemon (10.0.0.1:49400). 
Jan 28 01:32:15.459000 audit[7125]: USER_ACCT pid=7125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:15.464249 sshd[7125]: Accepted publickey for core from 10.0.0.1 port 49400 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:32:15.476287 sshd-session[7125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:32:15.470000 audit[7125]: CRED_ACQ pid=7125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:15.501650 systemd-logind[1590]: New session 41 of user core. Jan 28 01:32:15.528845 kernel: audit: type=1101 audit(1769563935.459:1065): pid=7125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:15.530197 kernel: audit: type=1103 audit(1769563935.470:1066): pid=7125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:15.470000 audit[7125]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2213b210 a2=3 a3=0 items=0 ppid=1 pid=7125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:15.551519 systemd[1]: Started session-41.scope - Session 41 of User core. 
Jan 28 01:32:15.577181 kernel: audit: type=1006 audit(1769563935.470:1067): pid=7125 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1 Jan 28 01:32:15.577233 kernel: audit: type=1300 audit(1769563935.470:1067): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2213b210 a2=3 a3=0 items=0 ppid=1 pid=7125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:15.470000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:15.623145 kernel: audit: type=1327 audit(1769563935.470:1067): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:15.623290 kernel: audit: type=1105 audit(1769563935.567:1068): pid=7125 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:15.567000 audit[7125]: USER_START pid=7125 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:15.578000 audit[7129]: CRED_ACQ pid=7129 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:15.655139 kernel: audit: type=1103 audit(1769563935.578:1069): pid=7129 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:16.025281 sshd[7129]: Connection closed by 10.0.0.1 port 49400 Jan 28 01:32:16.032369 sshd-session[7125]: pam_unix(sshd:session): session closed for user core Jan 28 01:32:16.031000 audit[7125]: USER_END pid=7125 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:16.066584 systemd[1]: sshd@39-10.0.0.61:22-10.0.0.1:49400.service: Deactivated successfully. Jan 28 01:32:16.076933 kubelet[2995]: E0128 01:32:16.076188 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:16.109106 kernel: audit: type=1106 audit(1769563936.031:1070): pid=7125 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:16.086820 systemd[1]: session-41.scope: Deactivated successfully. Jan 28 01:32:16.031000 audit[7125]: CRED_DISP pid=7125 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:16.132819 systemd-logind[1590]: Session 41 logged out. Waiting for processes to exit. Jan 28 01:32:16.136418 systemd-logind[1590]: Removed session 41. 
Jan 28 01:32:16.188180 kernel: audit: type=1104 audit(1769563936.031:1071): pid=7125 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:16.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.61:22-10.0.0.1:49400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:19.068194 kubelet[2995]: E0128 01:32:19.067493 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:20.080764 kubelet[2995]: E0128 01:32:20.080608 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:32:21.094393 kubelet[2995]: E0128 01:32:21.094343 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:32:21.215650 
kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:32:21.215817 kernel: audit: type=1130 audit(1769563941.135:1073): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.61:22-10.0.0.1:49410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:21.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.61:22-10.0.0.1:49410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:21.135734 systemd[1]: Started sshd@40-10.0.0.61:22-10.0.0.1:49410.service - OpenSSH per-connection server daemon (10.0.0.1:49410). Jan 28 01:32:21.606000 audit[7185]: USER_ACCT pid=7185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:21.614622 sshd[7185]: Accepted publickey for core from 10.0.0.1 port 49410 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:32:21.628148 sshd-session[7185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:32:21.685956 kernel: audit: type=1101 audit(1769563941.606:1074): pid=7185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:21.686367 kernel: audit: type=1103 audit(1769563941.611:1075): pid=7185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:21.611000 
audit[7185]: CRED_ACQ pid=7185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:21.701980 systemd-logind[1590]: New session 42 of user core. Jan 28 01:32:21.770694 kernel: audit: type=1006 audit(1769563941.611:1076): pid=7185 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1 Jan 28 01:32:21.822446 kernel: audit: type=1300 audit(1769563941.611:1076): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1db6ce80 a2=3 a3=0 items=0 ppid=1 pid=7185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:21.611000 audit[7185]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1db6ce80 a2=3 a3=0 items=0 ppid=1 pid=7185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:21.863391 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 28 01:32:21.901854 kernel: audit: type=1327 audit(1769563941.611:1076): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:21.611000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:21.939000 audit[7185]: USER_START pid=7185 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:22.044597 kernel: audit: type=1105 audit(1769563941.939:1077): pid=7185 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:22.044741 kernel: audit: type=1103 audit(1769563942.011:1078): pid=7189 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:22.011000 audit[7189]: CRED_ACQ pid=7189 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:22.093169 kubelet[2995]: E0128 01:32:22.084298 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:32:22.107719 kubelet[2995]: E0128 01:32:22.102616 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:32:22.891600 sshd[7189]: Connection closed by 10.0.0.1 port 49410 Jan 28 01:32:22.893422 sshd-session[7185]: pam_unix(sshd:session): session closed for user core Jan 28 01:32:22.897000 audit[7185]: USER_END pid=7185 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 01:32:22.924780 systemd[1]: sshd@40-10.0.0.61:22-10.0.0.1:49410.service: Deactivated successfully. Jan 28 01:32:22.958536 systemd[1]: session-42.scope: Deactivated successfully. Jan 28 01:32:22.964101 kernel: audit: type=1106 audit(1769563942.897:1079): pid=7185 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:22.964942 systemd-logind[1590]: Session 42 logged out. Waiting for processes to exit. Jan 28 01:32:22.902000 audit[7185]: CRED_DISP pid=7185 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:22.982398 systemd-logind[1590]: Removed session 42. Jan 28 01:32:23.018129 kernel: audit: type=1104 audit(1769563942.902:1080): pid=7185 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:22.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.61:22-10.0.0.1:49410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:32:25.131136 kubelet[2995]: E0128 01:32:25.119443 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:32:27.987320 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:32:27.987477 kernel: audit: type=1130 audit(1769563947.956:1082): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.61:22-10.0.0.1:35306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:27.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.61:22-10.0.0.1:35306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:27.957492 systemd[1]: Started sshd@41-10.0.0.61:22-10.0.0.1:35306.service - OpenSSH per-connection server daemon (10.0.0.1:35306). 
Jan 28 01:32:28.295000 audit[7204]: USER_ACCT pid=7204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:28.304934 sshd[7204]: Accepted publickey for core from 10.0.0.1 port 35306 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:32:28.316696 sshd-session[7204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:32:28.354372 kernel: audit: type=1101 audit(1769563948.295:1083): pid=7204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:28.309000 audit[7204]: CRED_ACQ pid=7204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:28.412832 kernel: audit: type=1103 audit(1769563948.309:1084): pid=7204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:28.416134 kernel: audit: type=1006 audit(1769563948.309:1085): pid=7204 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Jan 28 01:32:28.309000 audit[7204]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff47800c90 a2=3 a3=0 items=0 ppid=1 pid=7204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:28.437615 systemd-logind[1590]: New session 43 of user core. Jan 28 01:32:28.465417 kernel: audit: type=1300 audit(1769563948.309:1085): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff47800c90 a2=3 a3=0 items=0 ppid=1 pid=7204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:28.465657 kernel: audit: type=1327 audit(1769563948.309:1085): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:28.309000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:28.487407 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 28 01:32:28.530000 audit[7204]: USER_START pid=7204 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:28.600792 kernel: audit: type=1105 audit(1769563948.530:1086): pid=7204 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:28.601153 kernel: audit: type=1103 audit(1769563948.553:1087): pid=7208 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:28.553000 audit[7208]: CRED_ACQ pid=7208 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:29.091493 kubelet[2995]: E0128 01:32:29.088667 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:32:29.374938 sshd[7208]: Connection closed by 10.0.0.1 port 35306 Jan 28 01:32:29.375295 sshd-session[7204]: pam_unix(sshd:session): session closed for user core Jan 28 01:32:29.378000 audit[7204]: USER_END pid=7204 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:29.384867 systemd-logind[1590]: Session 43 logged out. Waiting for processes to exit. Jan 28 01:32:29.390461 systemd[1]: sshd@41-10.0.0.61:22-10.0.0.1:35306.service: Deactivated successfully. Jan 28 01:32:29.402574 kernel: audit: type=1106 audit(1769563949.378:1088): pid=7204 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:29.406941 systemd[1]: session-43.scope: Deactivated successfully. 
Jan 28 01:32:29.379000 audit[7204]: CRED_DISP pid=7204 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:29.413269 systemd-logind[1590]: Removed session 43. Jan 28 01:32:29.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.61:22-10.0.0.1:35306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:29.442339 kernel: audit: type=1104 audit(1769563949.379:1089): pid=7204 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:34.063835 kubelet[2995]: E0128 01:32:34.063482 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:34.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.61:22-10.0.0.1:46288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:34.451183 systemd[1]: Started sshd@42-10.0.0.61:22-10.0.0.1:46288.service - OpenSSH per-connection server daemon (10.0.0.1:46288). Jan 28 01:32:34.493622 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:32:34.493722 kernel: audit: type=1130 audit(1769563954.450:1091): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.61:22-10.0.0.1:46288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:32:34.699000 audit[7230]: USER_ACCT pid=7230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:34.712414 sshd[7230]: Accepted publickey for core from 10.0.0.1 port 46288 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:32:34.725979 sshd-session[7230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:32:34.740801 systemd-logind[1590]: New session 44 of user core. Jan 28 01:32:34.745577 kernel: audit: type=1101 audit(1769563954.699:1092): pid=7230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:34.714000 audit[7230]: CRED_ACQ pid=7230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:34.774164 kernel: audit: type=1103 audit(1769563954.714:1093): pid=7230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:34.776177 kernel: audit: type=1006 audit(1769563954.714:1094): pid=7230 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Jan 28 01:32:34.802127 kernel: audit: type=1300 audit(1769563954.714:1094): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8ac35d60 a2=3 a3=0 items=0 ppid=1 pid=7230 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:34.714000 audit[7230]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8ac35d60 a2=3 a3=0 items=0 ppid=1 pid=7230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:34.809259 kernel: audit: type=1327 audit(1769563954.714:1094): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:34.714000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:34.806686 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 28 01:32:34.825000 audit[7230]: USER_START pid=7230 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:34.878127 kernel: audit: type=1105 audit(1769563954.825:1095): pid=7230 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:34.878273 kernel: audit: type=1103 audit(1769563954.841:1096): pid=7235 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:34.841000 audit[7235]: CRED_ACQ pid=7235 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:35.081932 kubelet[2995]: E0128 01:32:35.075372 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:32:35.114649 kubelet[2995]: E0128 01:32:35.114278 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:32:35.253268 sshd[7235]: Connection closed by 10.0.0.1 port 46288 Jan 28 01:32:35.256321 sshd-session[7230]: pam_unix(sshd:session): session closed for user core Jan 28 01:32:35.266000 audit[7230]: USER_END pid=7230 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:35.279723 systemd[1]: sshd@42-10.0.0.61:22-10.0.0.1:46288.service: Deactivated successfully. Jan 28 01:32:35.291286 systemd[1]: session-44.scope: Deactivated successfully. Jan 28 01:32:35.296657 kernel: audit: type=1106 audit(1769563955.266:1097): pid=7230 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:35.296774 kernel: audit: type=1104 audit(1769563955.266:1098): pid=7230 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:35.266000 audit[7230]: CRED_DISP pid=7230 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:35.295799 systemd-logind[1590]: Session 44 logged out. Waiting for processes to exit. Jan 28 01:32:35.298841 systemd-logind[1590]: Removed session 44. Jan 28 01:32:35.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.61:22-10.0.0.1:46288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:32:36.319844 kubelet[2995]: E0128 01:32:36.319697 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:32:41.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.61:22-10.0.0.1:46298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:41.518112 systemd[1]: Started sshd@43-10.0.0.61:22-10.0.0.1:46298.service - OpenSSH per-connection server daemon (10.0.0.1:46298). Jan 28 01:32:41.528160 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:32:41.528357 kernel: audit: type=1130 audit(1769563961.517:1100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.61:22-10.0.0.1:46298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:32:41.796404 kubelet[2995]: E0128 01:32:41.791111 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:41.979306 kubelet[2995]: E0128 01:32:41.971574 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:32:42.022684 kubelet[2995]: E0128 01:32:42.004792 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:32:42.492000 audit[7250]: USER_ACCT pid=7250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:42.501279 sshd[7250]: Accepted publickey for core from 10.0.0.1 port 46298 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:32:42.578371 kernel: audit: type=1101 audit(1769563962.492:1101): pid=7250 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:42.534381 sshd-session[7250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:32:42.507000 audit[7250]: CRED_ACQ pid=7250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:42.635104 systemd-logind[1590]: New session 45 of user core. Jan 28 01:32:42.683964 kernel: audit: type=1103 audit(1769563962.507:1102): pid=7250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:42.718301 kernel: audit: type=1006 audit(1769563962.507:1103): pid=7250 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1 Jan 28 01:32:42.507000 audit[7250]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc584fb1b0 a2=3 a3=0 items=0 ppid=1 pid=7250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:42.507000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:42.804921 kernel: audit: type=1300 audit(1769563962.507:1103): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc584fb1b0 a2=3 a3=0 items=0 ppid=1 pid=7250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:42.805179 kernel: audit: type=1327 audit(1769563962.507:1103): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:42.814488 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 28 01:32:42.837000 audit[7250]: USER_START pid=7250 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:42.935157 kernel: audit: type=1105 audit(1769563962.837:1104): pid=7250 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:42.935367 kernel: audit: type=1103 audit(1769563962.881:1105): pid=7254 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:42.881000 audit[7254]: CRED_ACQ pid=7254 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:43.142677 kubelet[2995]: E0128 01:32:43.134408 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:32:44.178667 sshd[7254]: Connection closed by 10.0.0.1 port 46298 Jan 28 01:32:44.199393 sshd-session[7250]: pam_unix(sshd:session): session closed for user core Jan 28 01:32:44.477390 kernel: audit: type=1106 audit(1769563964.222:1106): pid=7250 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:44.477542 kernel: audit: type=1104 audit(1769563964.222:1107): pid=7250 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:44.222000 audit[7250]: USER_END pid=7250 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:44.222000 audit[7250]: CRED_DISP pid=7250 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:44.296222 systemd-logind[1590]: Session 45 logged out. Waiting for processes to exit. Jan 28 01:32:44.322105 systemd[1]: sshd@43-10.0.0.61:22-10.0.0.1:46298.service: Deactivated successfully. 
Jan 28 01:32:44.377632 systemd[1]: session-45.scope: Deactivated successfully. Jan 28 01:32:44.432931 systemd-logind[1590]: Removed session 45. Jan 28 01:32:44.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.61:22-10.0.0.1:46298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:45.076829 kubelet[2995]: E0128 01:32:45.073590 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:46.061721 kubelet[2995]: E0128 01:32:46.061503 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:49.255505 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:32:49.255609 kernel: audit: type=1130 audit(1769563969.232:1109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.61:22-10.0.0.1:34662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:49.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.61:22-10.0.0.1:34662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:49.232694 systemd[1]: Started sshd@44-10.0.0.61:22-10.0.0.1:34662.service - OpenSSH per-connection server daemon (10.0.0.1:34662). 
Jan 28 01:32:50.018000 audit[7268]: USER_ACCT pid=7268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:50.029450 sshd[7268]: Accepted publickey for core from 10.0.0.1 port 34662 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:32:50.045927 sshd-session[7268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:32:50.079285 kernel: audit: type=1101 audit(1769563970.018:1110): pid=7268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:50.085268 kubelet[2995]: E0128 01:32:50.084685 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:32:50.024000 audit[7268]: CRED_ACQ pid=7268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:50.104802 kubelet[2995]: E0128 01:32:50.094455 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:32:50.091637 systemd-logind[1590]: New session 46 of user core. Jan 28 01:32:50.182713 kernel: audit: type=1103 audit(1769563970.024:1111): pid=7268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:50.197168 kernel: audit: type=1006 audit(1769563970.024:1112): pid=7268 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=46 res=1 Jan 28 01:32:50.198104 kernel: audit: type=1300 audit(1769563970.024:1112): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9dca12a0 a2=3 a3=0 items=0 ppid=1 pid=7268 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:50.024000 audit[7268]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9dca12a0 a2=3 a3=0 items=0 ppid=1 pid=7268 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:32:50.198319 kubelet[2995]: E0128 01:32:50.163660 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:32:50.275353 kernel: audit: type=1327 audit(1769563970.024:1112): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:50.024000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:50.266131 systemd[1]: Started session-46.scope - Session 46 of User core. 
Jan 28 01:32:50.293000 audit[7268]: USER_START pid=7268 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:50.372475 kernel: audit: type=1105 audit(1769563970.293:1113): pid=7268 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:50.303000 audit[7296]: CRED_ACQ pid=7296 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:50.419705 kernel: audit: type=1103 audit(1769563970.303:1114): pid=7296 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:51.311139 sshd[7296]: Connection closed by 10.0.0.1 port 34662 Jan 28 01:32:51.315782 sshd-session[7268]: pam_unix(sshd:session): session closed for user core Jan 28 01:32:51.326000 audit[7268]: USER_END pid=7268 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:51.384823 systemd[1]: sshd@44-10.0.0.61:22-10.0.0.1:34662.service: Deactivated successfully. 
Jan 28 01:32:51.403152 kernel: audit: type=1106 audit(1769563971.326:1115): pid=7268 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:51.406464 systemd[1]: session-46.scope: Deactivated successfully. Jan 28 01:32:51.326000 audit[7268]: CRED_DISP pid=7268 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:51.421233 systemd-logind[1590]: Session 46 logged out. Waiting for processes to exit. Jan 28 01:32:51.435748 systemd-logind[1590]: Removed session 46. Jan 28 01:32:51.498987 kernel: audit: type=1104 audit(1769563971.326:1116): pid=7268 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:51.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.61:22-10.0.0.1:34662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:56.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.61:22-10.0.0.1:43318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:56.333727 systemd[1]: Started sshd@45-10.0.0.61:22-10.0.0.1:43318.service - OpenSSH per-connection server daemon (10.0.0.1:43318). 
Jan 28 01:32:56.368571 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:32:56.368644 kernel: audit: type=1130 audit(1769563976.333:1118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.61:22-10.0.0.1:43318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:32:56.886000 audit[7315]: USER_ACCT pid=7315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:56.896376 sshd[7315]: Accepted publickey for core from 10.0.0.1 port 43318 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:32:56.902634 sshd-session[7315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:32:56.983396 kernel: audit: type=1101 audit(1769563976.886:1119): pid=7315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:56.983555 kernel: audit: type=1103 audit(1769563976.887:1120): pid=7315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:56.887000 audit[7315]: CRED_ACQ pid=7315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:57.003131 systemd-logind[1590]: New session 47 of user core. 
Jan 28 01:32:57.014701 kernel: audit: type=1006 audit(1769563976.887:1121): pid=7315 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=47 res=1 Jan 28 01:32:57.015456 kernel: audit: type=1300 audit(1769563976.887:1121): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd29fee10 a2=3 a3=0 items=0 ppid=1 pid=7315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:56.887000 audit[7315]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd29fee10 a2=3 a3=0 items=0 ppid=1 pid=7315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:32:57.038521 kernel: audit: type=1327 audit(1769563976.887:1121): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:56.887000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:32:57.064573 systemd[1]: Started session-47.scope - Session 47 of User core. 
Jan 28 01:32:57.067769 kubelet[2995]: E0128 01:32:57.067609 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:32:57.097422 kubelet[2995]: E0128 01:32:57.090226 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:32:57.100000 audit[7315]: USER_START pid=7315 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:57.175166 kernel: audit: type=1105 audit(1769563977.100:1122): pid=7315 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:57.117000 
audit[7319]: CRED_ACQ pid=7319 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:57.236744 kernel: audit: type=1103 audit(1769563977.117:1123): pid=7319 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:58.103674 kubelet[2995]: E0128 01:32:58.103603 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:32:58.167077 sshd[7319]: Connection closed by 10.0.0.1 port 43318 Jan 28 01:32:58.183441 sshd-session[7315]: pam_unix(sshd:session): session closed for user core Jan 28 01:32:58.191000 audit[7315]: USER_END pid=7315 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:58.230832 systemd-logind[1590]: Session 47 logged out. Waiting for processes to exit. Jan 28 01:32:58.251761 systemd[1]: sshd@45-10.0.0.61:22-10.0.0.1:43318.service: Deactivated successfully. Jan 28 01:32:58.284740 systemd[1]: session-47.scope: Deactivated successfully. 
Jan 28 01:32:58.199000 audit[7315]: CRED_DISP pid=7315 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:58.356298 systemd-logind[1590]: Removed session 47. Jan 28 01:32:58.422174 kernel: audit: type=1106 audit(1769563978.191:1124): pid=7315 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:58.422318 kernel: audit: type=1104 audit(1769563978.199:1125): pid=7315 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:32:58.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.61:22-10.0.0.1:43318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:02.071197 kubelet[2995]: E0128 01:33:02.069566 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:33:03.236655 systemd[1]: Started sshd@46-10.0.0.61:22-10.0.0.1:55822.service - OpenSSH per-connection server daemon (10.0.0.1:55822). Jan 28 01:33:03.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.61:22-10.0.0.1:55822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:03.267311 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:03.267438 kernel: audit: type=1130 audit(1769563983.235:1127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.61:22-10.0.0.1:55822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:03.587000 audit[7335]: USER_ACCT pid=7335 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:03.594575 sshd[7335]: Accepted publickey for core from 10.0.0.1 port 55822 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:03.615926 sshd-session[7335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:03.654205 kernel: audit: type=1101 audit(1769563983.587:1128): pid=7335 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:03.608000 audit[7335]: CRED_ACQ pid=7335 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:03.682225 systemd-logind[1590]: New session 48 of user core. 
Jan 28 01:33:03.711162 kernel: audit: type=1103 audit(1769563983.608:1129): pid=7335 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:03.711295 kernel: audit: type=1006 audit(1769563983.608:1130): pid=7335 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=48 res=1 Jan 28 01:33:03.711351 kernel: audit: type=1300 audit(1769563983.608:1130): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6fc9f4b0 a2=3 a3=0 items=0 ppid=1 pid=7335 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:03.608000 audit[7335]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6fc9f4b0 a2=3 a3=0 items=0 ppid=1 pid=7335 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:03.608000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:03.763482 systemd[1]: Started session-48.scope - Session 48 of User core. 
Jan 28 01:33:03.775795 kernel: audit: type=1327 audit(1769563983.608:1130): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:03.784000 audit[7335]: USER_START pid=7335 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:03.830255 kernel: audit: type=1105 audit(1769563983.784:1131): pid=7335 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:03.830399 kernel: audit: type=1103 audit(1769563983.788:1132): pid=7339 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:03.788000 audit[7339]: CRED_ACQ pid=7339 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:04.083598 kubelet[2995]: E0128 01:33:04.083384 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:33:04.426080 sshd[7339]: Connection closed by 10.0.0.1 port 55822 Jan 28 01:33:04.426716 sshd-session[7335]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:04.445000 audit[7335]: USER_END pid=7335 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:04.467191 systemd-logind[1590]: Session 48 logged out. Waiting for processes to exit. Jan 28 01:33:04.469805 systemd[1]: sshd@46-10.0.0.61:22-10.0.0.1:55822.service: Deactivated successfully. Jan 28 01:33:04.519284 kernel: audit: type=1106 audit(1769563984.445:1133): pid=7335 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:04.518927 systemd[1]: session-48.scope: Deactivated successfully. Jan 28 01:33:04.446000 audit[7335]: CRED_DISP pid=7335 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:04.533257 systemd-logind[1590]: Removed session 48. 
Jan 28 01:33:04.572899 kernel: audit: type=1104 audit(1769563984.446:1134): pid=7335 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:04.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.61:22-10.0.0.1:55822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:05.088805 kubelet[2995]: E0128 01:33:05.087850 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:33:09.093125 kubelet[2995]: E0128 01:33:09.091468 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:33:09.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.61:22-10.0.0.1:55842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:09.483594 systemd[1]: Started sshd@47-10.0.0.61:22-10.0.0.1:55842.service - OpenSSH per-connection server daemon (10.0.0.1:55842). Jan 28 01:33:09.536222 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:09.536380 kernel: audit: type=1130 audit(1769563989.493:1136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.61:22-10.0.0.1:55842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:09.775000 audit[7353]: USER_ACCT pid=7353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:09.834134 kernel: audit: type=1101 audit(1769563989.775:1137): pid=7353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:09.800733 sshd-session[7353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:09.849483 sshd[7353]: Accepted publickey for core from 10.0.0.1 port 55842 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:09.795000 audit[7353]: CRED_ACQ pid=7353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:09.851730 systemd-logind[1590]: New session 49 of user core. 
Jan 28 01:33:09.925106 kernel: audit: type=1103 audit(1769563989.795:1138): pid=7353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:09.925246 kernel: audit: type=1006 audit(1769563989.795:1139): pid=7353 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=49 res=1 Jan 28 01:33:09.795000 audit[7353]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4b0ab840 a2=3 a3=0 items=0 ppid=1 pid=7353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:09.997232 kernel: audit: type=1300 audit(1769563989.795:1139): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4b0ab840 a2=3 a3=0 items=0 ppid=1 pid=7353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:09.795000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:10.031145 kernel: audit: type=1327 audit(1769563989.795:1139): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:10.045488 systemd[1]: Started session-49.scope - Session 49 of User core. 
Jan 28 01:33:10.091000 audit[7353]: USER_START pid=7353 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:10.192158 kernel: audit: type=1105 audit(1769563990.091:1140): pid=7353 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:10.131000 audit[7357]: CRED_ACQ pid=7357 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:10.261247 kernel: audit: type=1103 audit(1769563990.131:1141): pid=7357 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:10.938548 sshd[7357]: Connection closed by 10.0.0.1 port 55842 Jan 28 01:33:10.944650 sshd-session[7353]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:10.957000 audit[7353]: USER_END pid=7353 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:10.981195 systemd[1]: sshd@47-10.0.0.61:22-10.0.0.1:55842.service: Deactivated successfully. 
Jan 28 01:33:10.992798 systemd[1]: session-49.scope: Deactivated successfully. Jan 28 01:33:11.008874 systemd-logind[1590]: Session 49 logged out. Waiting for processes to exit. Jan 28 01:33:11.034190 kernel: audit: type=1106 audit(1769563990.957:1142): pid=7353 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:11.034317 kernel: audit: type=1104 audit(1769563990.958:1143): pid=7353 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:10.958000 audit[7353]: CRED_DISP pid=7353 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:11.049248 systemd-logind[1590]: Removed session 49. Jan 28 01:33:10.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.61:22-10.0.0.1:55842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:11.091553 kubelet[2995]: E0128 01:33:11.091499 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:33:11.093285 kubelet[2995]: E0128 01:33:11.092720 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:33:14.067149 kubelet[2995]: E0128 01:33:14.065258 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:33:15.075772 kubelet[2995]: E0128 01:33:15.075709 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:33:15.995486 systemd[1]: Started sshd@48-10.0.0.61:22-10.0.0.1:59250.service - OpenSSH per-connection server daemon (10.0.0.1:59250). Jan 28 01:33:16.001625 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:16.001704 kernel: audit: type=1130 audit(1769563995.994:1145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.61:22-10.0.0.1:59250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:15.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.61:22-10.0.0.1:59250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:16.065112 kubelet[2995]: E0128 01:33:16.064362 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:33:16.320560 sshd[7371]: Accepted publickey for core from 10.0.0.1 port 59250 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:16.312000 audit[7371]: USER_ACCT pid=7371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:16.327432 sshd-session[7371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:16.322000 audit[7371]: CRED_ACQ pid=7371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:16.395411 systemd-logind[1590]: New session 50 of user core. 
Jan 28 01:33:16.452888 kernel: audit: type=1101 audit(1769563996.312:1146): pid=7371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:16.453194 kernel: audit: type=1103 audit(1769563996.322:1147): pid=7371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:16.454887 kernel: audit: type=1006 audit(1769563996.322:1148): pid=7371 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=50 res=1 Jan 28 01:33:16.322000 audit[7371]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfdd97590 a2=3 a3=0 items=0 ppid=1 pid=7371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:16.531115 kernel: audit: type=1300 audit(1769563996.322:1148): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfdd97590 a2=3 a3=0 items=0 ppid=1 pid=7371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:16.531328 kernel: audit: type=1327 audit(1769563996.322:1148): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:16.322000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:16.552190 systemd[1]: Started session-50.scope - Session 50 of User core. 
Jan 28 01:33:16.574000 audit[7371]: USER_START pid=7371 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:16.640506 kernel: audit: type=1105 audit(1769563996.574:1149): pid=7371 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:16.640676 kernel: audit: type=1103 audit(1769563996.585:1150): pid=7375 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:16.585000 audit[7375]: CRED_ACQ pid=7375 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:17.209215 sshd[7375]: Connection closed by 10.0.0.1 port 59250 Jan 28 01:33:17.200212 sshd-session[7371]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:17.208000 audit[7371]: USER_END pid=7371 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:17.253418 systemd[1]: sshd@48-10.0.0.61:22-10.0.0.1:59250.service: Deactivated successfully. 
Jan 28 01:33:17.274412 systemd[1]: session-50.scope: Deactivated successfully. Jan 28 01:33:17.291219 systemd-logind[1590]: Session 50 logged out. Waiting for processes to exit. Jan 28 01:33:17.336516 kernel: audit: type=1106 audit(1769563997.208:1151): pid=7371 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:17.336681 kernel: audit: type=1104 audit(1769563997.208:1152): pid=7371 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:17.208000 audit[7371]: CRED_DISP pid=7371 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:17.310500 systemd-logind[1590]: Removed session 50. Jan 28 01:33:17.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.61:22-10.0.0.1:59250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:18.063423 kubelet[2995]: E0128 01:33:18.059777 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:19.070802 kubelet[2995]: E0128 01:33:19.070714 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:22.087956 kubelet[2995]: E0128 01:33:22.085240 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:33:22.238601 systemd[1]: Started sshd@49-10.0.0.61:22-10.0.0.1:59266.service - OpenSSH per-connection server daemon (10.0.0.1:59266). Jan 28 01:33:22.276143 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:22.276305 kernel: audit: type=1130 audit(1769564002.236:1154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.61:22-10.0.0.1:59266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:22.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.61:22-10.0.0.1:59266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:22.613000 audit[7414]: USER_ACCT pid=7414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:22.616942 sshd[7414]: Accepted publickey for core from 10.0.0.1 port 59266 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:22.663532 sshd-session[7414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:22.715250 kernel: audit: type=1101 audit(1769564002.613:1155): pid=7414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:22.633000 audit[7414]: CRED_ACQ pid=7414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:22.789504 systemd-logind[1590]: New session 51 of user core. 
Jan 28 01:33:22.835248 kernel: audit: type=1103 audit(1769564002.633:1156): pid=7414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:22.835396 kernel: audit: type=1006 audit(1769564002.633:1157): pid=7414 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=51 res=1 Jan 28 01:33:22.835468 kernel: audit: type=1300 audit(1769564002.633:1157): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8b217780 a2=3 a3=0 items=0 ppid=1 pid=7414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:22.633000 audit[7414]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8b217780 a2=3 a3=0 items=0 ppid=1 pid=7414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:22.881365 kernel: audit: type=1327 audit(1769564002.633:1157): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:22.633000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:22.895368 systemd[1]: Started session-51.scope - Session 51 of User core. 
Jan 28 01:33:22.961000 audit[7414]: USER_START pid=7414 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:23.088204 kernel: audit: type=1105 audit(1769564002.961:1158): pid=7414 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:23.059000 audit[7418]: CRED_ACQ pid=7418 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:23.216235 kernel: audit: type=1103 audit(1769564003.059:1159): pid=7418 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:23.486363 sshd[7418]: Connection closed by 10.0.0.1 port 59266 Jan 28 01:33:23.491721 sshd-session[7414]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:23.500000 audit[7414]: USER_END pid=7414 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:23.514932 systemd[1]: sshd@49-10.0.0.61:22-10.0.0.1:59266.service: Deactivated successfully. 
Jan 28 01:33:23.523903 systemd[1]: session-51.scope: Deactivated successfully. Jan 28 01:33:23.540247 systemd-logind[1590]: Session 51 logged out. Waiting for processes to exit. Jan 28 01:33:23.570363 systemd-logind[1590]: Removed session 51. Jan 28 01:33:23.580646 kernel: audit: type=1106 audit(1769564003.500:1160): pid=7414 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:23.580740 kernel: audit: type=1104 audit(1769564003.501:1161): pid=7414 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:23.501000 audit[7414]: CRED_DISP pid=7414 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:23.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.61:22-10.0.0.1:59266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:24.087219 kubelet[2995]: E0128 01:33:24.082889 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:24.102584 kubelet[2995]: E0128 01:33:24.090277 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:33:25.072217 kubelet[2995]: E0128 01:33:25.070193 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:33:26.071529 kubelet[2995]: E0128 01:33:26.071266 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:33:28.074633 kubelet[2995]: E0128 01:33:28.064604 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:33:28.074633 kubelet[2995]: E0128 01:33:28.067202 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:33:28.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.61:22-10.0.0.1:36844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:28.535543 systemd[1]: Started sshd@50-10.0.0.61:22-10.0.0.1:36844.service - OpenSSH per-connection server daemon (10.0.0.1:36844). Jan 28 01:33:28.575121 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:28.575248 kernel: audit: type=1130 audit(1769564008.534:1163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.61:22-10.0.0.1:36844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:28.831000 audit[7431]: USER_ACCT pid=7431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:28.844526 sshd[7431]: Accepted publickey for core from 10.0.0.1 port 36844 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:28.855081 sshd-session[7431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:28.847000 audit[7431]: CRED_ACQ pid=7431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:28.887876 systemd-logind[1590]: New session 52 of user core. 
Jan 28 01:33:28.913815 kernel: audit: type=1101 audit(1769564008.831:1164): pid=7431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:28.913965 kernel: audit: type=1103 audit(1769564008.847:1165): pid=7431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:28.914563 kernel: audit: type=1006 audit(1769564008.848:1166): pid=7431 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=52 res=1 Jan 28 01:33:28.928111 kernel: audit: type=1300 audit(1769564008.848:1166): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb3a26f30 a2=3 a3=0 items=0 ppid=1 pid=7431 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:28.848000 audit[7431]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb3a26f30 a2=3 a3=0 items=0 ppid=1 pid=7431 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:28.965703 kernel: audit: type=1327 audit(1769564008.848:1166): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:28.848000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:28.964453 systemd[1]: Started session-52.scope - Session 52 of User core. 
Jan 28 01:33:28.993000 audit[7431]: USER_START pid=7431 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:29.035155 kernel: audit: type=1105 audit(1769564008.993:1167): pid=7431 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:28.995000 audit[7435]: CRED_ACQ pid=7435 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:29.078272 kernel: audit: type=1103 audit(1769564008.995:1168): pid=7435 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:29.447949 sshd[7435]: Connection closed by 10.0.0.1 port 36844 Jan 28 01:33:29.449714 sshd-session[7431]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:29.454000 audit[7431]: USER_END pid=7431 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:29.483145 kernel: audit: type=1106 audit(1769564009.454:1169): pid=7431 uid=0 auid=500 ses=52 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:29.454000 audit[7431]: CRED_DISP pid=7431 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:29.484866 systemd[1]: sshd@50-10.0.0.61:22-10.0.0.1:36844.service: Deactivated successfully. Jan 28 01:33:29.489389 systemd[1]: session-52.scope: Deactivated successfully. Jan 28 01:33:29.494855 systemd-logind[1590]: Session 52 logged out. Waiting for processes to exit. Jan 28 01:33:29.497443 systemd-logind[1590]: Removed session 52. Jan 28 01:33:29.506548 kernel: audit: type=1104 audit(1769564009.454:1170): pid=7431 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:29.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.61:22-10.0.0.1:36844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:34.533464 systemd[1]: Started sshd@51-10.0.0.61:22-10.0.0.1:41306.service - OpenSSH per-connection server daemon (10.0.0.1:41306). Jan 28 01:33:34.623121 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:34.623249 kernel: audit: type=1130 audit(1769564014.532:1172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.61:22-10.0.0.1:41306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:34.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.61:22-10.0.0.1:41306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:34.876146 containerd[1624]: time="2026-01-28T01:33:34.870325748Z" level=info msg="container event discarded" container=db4612bc9e5b7b0aa1690679ae751c0b84d64ddd2ef8384ab8cc5871481bd9b2 type=CONTAINER_STOPPED_EVENT Jan 28 01:33:34.876146 containerd[1624]: time="2026-01-28T01:33:34.870415264Z" level=info msg="container event discarded" container=66714d9fe0971c920b68e90921dd5a364dd9fe47f54dda900904d370d1797212 type=CONTAINER_STOPPED_EVENT Jan 28 01:33:34.933000 audit[7452]: USER_ACCT pid=7452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:34.965379 kernel: audit: type=1101 audit(1769564014.933:1173): pid=7452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:34.969329 sshd[7452]: Accepted publickey for core from 10.0.0.1 port 41306 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:34.969000 audit[7452]: CRED_ACQ pid=7452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:34.972886 sshd-session[7452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:35.020657 kernel: audit: type=1103 audit(1769564014.969:1174): 
pid=7452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:35.020782 kernel: audit: type=1006 audit(1769564014.970:1175): pid=7452 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=53 res=1 Jan 28 01:33:35.015784 systemd-logind[1590]: New session 53 of user core. Jan 28 01:33:34.970000 audit[7452]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd98aa1c0 a2=3 a3=0 items=0 ppid=1 pid=7452 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:34.970000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:35.064864 kernel: audit: type=1300 audit(1769564014.970:1175): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd98aa1c0 a2=3 a3=0 items=0 ppid=1 pid=7452 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:35.067682 kernel: audit: type=1327 audit(1769564014.970:1175): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:35.064510 systemd[1]: Started session-53.scope - Session 53 of User core. 
Jan 28 01:33:35.083000 audit[7452]: USER_START pid=7452 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:35.121199 kernel: audit: type=1105 audit(1769564015.083:1176): pid=7452 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:35.095000 audit[7460]: CRED_ACQ pid=7460 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:35.157535 kernel: audit: type=1103 audit(1769564015.095:1177): pid=7460 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:35.668959 sshd[7460]: Connection closed by 10.0.0.1 port 41306 Jan 28 01:33:35.667596 sshd-session[7452]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:35.674000 audit[7452]: USER_END pid=7452 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:35.721077 systemd-logind[1590]: Session 53 logged out. Waiting for processes to exit. 
Jan 28 01:33:35.721854 systemd[1]: sshd@51-10.0.0.61:22-10.0.0.1:41306.service: Deactivated successfully. Jan 28 01:33:35.762372 kernel: audit: type=1106 audit(1769564015.674:1178): pid=7452 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:35.674000 audit[7452]: CRED_DISP pid=7452 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:35.769116 systemd[1]: session-53.scope: Deactivated successfully. Jan 28 01:33:35.784914 systemd-logind[1590]: Removed session 53. Jan 28 01:33:35.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.61:22-10.0.0.1:41306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:35.826180 kernel: audit: type=1104 audit(1769564015.674:1179): pid=7452 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:36.067222 kubelet[2995]: E0128 01:33:36.064717 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:33:36.076466 containerd[1624]: time="2026-01-28T01:33:36.076374678Z" level=info msg="container event discarded" container=c06cf8ae82bdcf7cdece30a276a0a70d921c82f263ccd3a406f48b98e22886cf type=CONTAINER_CREATED_EVENT Jan 28 01:33:36.129393 containerd[1624]: time="2026-01-28T01:33:36.128424447Z" level=info msg="container event discarded" container=f91623b34d6b1235bb31655ab84d6d346cdbdbc89c295e96017342940666b977 type=CONTAINER_CREATED_EVENT Jan 28 01:33:37.068858 containerd[1624]: time="2026-01-28T01:33:37.068729312Z" level=info msg="container event discarded" container=c06cf8ae82bdcf7cdece30a276a0a70d921c82f263ccd3a406f48b98e22886cf type=CONTAINER_STARTED_EVENT Jan 28 01:33:37.070914 kubelet[2995]: E0128 01:33:37.069729 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:33:37.162601 containerd[1624]: time="2026-01-28T01:33:37.162506024Z" level=info msg="container event discarded" container=f91623b34d6b1235bb31655ab84d6d346cdbdbc89c295e96017342940666b977 type=CONTAINER_STARTED_EVENT Jan 28 01:33:40.084687 kubelet[2995]: E0128 01:33:40.078980 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:33:40.084687 kubelet[2995]: E0128 01:33:40.081154 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:33:40.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.61:22-10.0.0.1:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:40.810653 systemd[1]: Started sshd@52-10.0.0.61:22-10.0.0.1:41314.service - OpenSSH per-connection server daemon (10.0.0.1:41314). Jan 28 01:33:40.833632 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:40.833819 kernel: audit: type=1130 audit(1769564020.809:1181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.61:22-10.0.0.1:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:41.041374 sshd[7477]: Accepted publickey for core from 10.0.0.1 port 41314 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:41.087147 kernel: audit: type=1101 audit(1769564021.036:1182): pid=7477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.036000 audit[7477]: USER_ACCT pid=7477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.070958 sshd-session[7477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:41.052000 audit[7477]: CRED_ACQ pid=7477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.154666 systemd-logind[1590]: New session 54 of user core. 
Jan 28 01:33:41.157719 kernel: audit: type=1103 audit(1769564021.052:1183): pid=7477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.190181 kubelet[2995]: E0128 01:33:41.165261 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:33:41.052000 audit[7477]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2b89a070 a2=3 a3=0 items=0 ppid=1 pid=7477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:41.236947 kernel: audit: type=1006 audit(1769564021.052:1184): pid=7477 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=54 res=1 Jan 28 01:33:41.237185 kernel: audit: type=1300 audit(1769564021.052:1184): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2b89a070 a2=3 a3=0 items=0 ppid=1 pid=7477 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:41.237243 kernel: audit: type=1327 audit(1769564021.052:1184): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:41.052000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:41.270941 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 28 01:33:41.297000 audit[7477]: USER_START pid=7477 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.315000 audit[7481]: CRED_ACQ pid=7481 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.403938 kernel: audit: type=1105 audit(1769564021.297:1185): pid=7477 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.405371 kernel: audit: type=1103 audit(1769564021.315:1186): pid=7481 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.835206 sshd[7481]: Connection closed by 10.0.0.1 port 41314 Jan 28 01:33:41.836659 sshd-session[7477]: pam_unix(sshd:session): session closed for user core Jan 28 
01:33:41.842000 audit[7477]: USER_END pid=7477 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.857710 systemd[1]: sshd@52-10.0.0.61:22-10.0.0.1:41314.service: Deactivated successfully. Jan 28 01:33:41.865727 systemd[1]: session-54.scope: Deactivated successfully. Jan 28 01:33:41.873483 systemd-logind[1590]: Session 54 logged out. Waiting for processes to exit. Jan 28 01:33:41.885127 systemd-logind[1590]: Removed session 54. Jan 28 01:33:41.903406 kernel: audit: type=1106 audit(1769564021.842:1187): pid=7477 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.903747 kernel: audit: type=1104 audit(1769564021.849:1188): pid=7477 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.849000 audit[7477]: CRED_DISP pid=7477 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:41.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.61:22-10.0.0.1:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:43.069732 kubelet[2995]: E0128 01:33:43.068899 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:33:44.072123 kubelet[2995]: E0128 01:33:44.064346 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:46.061535 kubelet[2995]: E0128 01:33:46.061345 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:46.890809 systemd[1]: Started sshd@53-10.0.0.61:22-10.0.0.1:46290.service - OpenSSH per-connection server daemon (10.0.0.1:46290). Jan 28 01:33:46.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.61:22-10.0.0.1:46290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:46.963380 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:46.963554 kernel: audit: type=1130 audit(1769564026.890:1190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.61:22-10.0.0.1:46290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:47.087101 kubelet[2995]: E0128 01:33:47.085983 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:33:47.136954 kernel: audit: type=1101 audit(1769564027.115:1191): pid=7496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:47.115000 audit[7496]: USER_ACCT pid=7496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:47.122422 sshd-session[7496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:47.139222 sshd[7496]: Accepted publickey for core from 10.0.0.1 port 46290 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:47.115000 audit[7496]: CRED_ACQ pid=7496 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:47.186168 kernel: audit: type=1103 audit(1769564027.115:1192): pid=7496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:47.180726 systemd-logind[1590]: New session 55 of user core. Jan 28 01:33:47.115000 audit[7496]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4cac36c0 a2=3 a3=0 items=0 ppid=1 pid=7496 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:47.224905 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 28 01:33:47.230197 kernel: audit: type=1006 audit(1769564027.115:1193): pid=7496 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=55 res=1 Jan 28 01:33:47.230301 kernel: audit: type=1300 audit(1769564027.115:1193): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4cac36c0 a2=3 a3=0 items=0 ppid=1 pid=7496 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:47.235333 kernel: audit: type=1327 audit(1769564027.115:1193): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:47.115000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:47.266000 audit[7496]: USER_START pid=7496 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:47.289000 audit[7501]: CRED_ACQ pid=7501 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:47.373572 kernel: audit: type=1105 audit(1769564027.266:1194): pid=7496 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:47.373691 kernel: audit: type=1103 audit(1769564027.289:1195): pid=7501 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:47.966671 sshd[7501]: Connection closed by 10.0.0.1 port 46290 Jan 28 01:33:47.970363 sshd-session[7496]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:47.987000 audit[7496]: USER_END pid=7496 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:48.027283 kernel: audit: type=1106 audit(1769564027.987:1196): pid=7496 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:48.027374 kernel: audit: type=1104 audit(1769564027.988:1197): pid=7496 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:47.988000 audit[7496]: CRED_DISP pid=7496 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:48.022950 systemd[1]: sshd@53-10.0.0.61:22-10.0.0.1:46290.service: Deactivated successfully. Jan 28 01:33:48.045929 systemd[1]: session-55.scope: Deactivated successfully. Jan 28 01:33:48.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.61:22-10.0.0.1:46290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:48.086866 systemd-logind[1590]: Session 55 logged out. Waiting for processes to exit. Jan 28 01:33:48.097881 systemd-logind[1590]: Removed session 55. 
Jan 28 01:33:52.062214 kubelet[2995]: E0128 01:33:52.061482 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:52.063495 kubelet[2995]: E0128 01:33:52.063457 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:33:52.064702 kubelet[2995]: E0128 01:33:52.064636 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:33:52.981923 systemd[1]: Started sshd@54-10.0.0.61:22-10.0.0.1:36640.service - OpenSSH per-connection server daemon (10.0.0.1:36640). Jan 28 01:33:53.010324 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:53.010469 kernel: audit: type=1130 audit(1769564032.981:1199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.61:22-10.0.0.1:36640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:52.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.61:22-10.0.0.1:36640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:53.097177 kubelet[2995]: E0128 01:33:53.096788 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:33:53.150000 audit[7540]: USER_ACCT pid=7540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.154364 sshd[7540]: Accepted publickey for core from 10.0.0.1 port 36640 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:53.160279 sshd-session[7540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:53.152000 audit[7540]: CRED_ACQ pid=7540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.183232 systemd-logind[1590]: New session 56 of user core. Jan 28 01:33:53.198796 kernel: audit: type=1101 audit(1769564033.150:1200): pid=7540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.199222 kernel: audit: type=1103 audit(1769564033.152:1201): pid=7540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.199332 kernel: audit: type=1006 audit(1769564033.152:1202): pid=7540 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=56 res=1 Jan 28 01:33:53.152000 audit[7540]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5b1288d0 a2=3 a3=0 items=0 ppid=1 pid=7540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:53.270725 kernel: audit: type=1300 audit(1769564033.152:1202): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5b1288d0 a2=3 a3=0 items=0 ppid=1 pid=7540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:53.271201 systemd[1]: Started session-56.scope - Session 56 of User core. 
Jan 28 01:33:53.152000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:53.286200 kernel: audit: type=1327 audit(1769564033.152:1202): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:53.330418 kernel: audit: type=1105 audit(1769564033.289:1203): pid=7540 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.289000 audit[7540]: USER_START pid=7540 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.328000 audit[7546]: CRED_ACQ pid=7546 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.364341 kernel: audit: type=1103 audit(1769564033.328:1204): pid=7546 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.713672 sshd[7546]: Connection closed by 10.0.0.1 port 36640 Jan 28 01:33:53.715772 sshd-session[7540]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:53.723000 audit[7540]: USER_END pid=7540 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.738271 systemd[1]: sshd@54-10.0.0.61:22-10.0.0.1:36640.service: Deactivated successfully. Jan 28 01:33:53.755775 systemd[1]: session-56.scope: Deactivated successfully. Jan 28 01:33:53.724000 audit[7540]: CRED_DISP pid=7540 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.774301 systemd-logind[1590]: Session 56 logged out. Waiting for processes to exit. Jan 28 01:33:53.783614 systemd-logind[1590]: Removed session 56. Jan 28 01:33:53.793209 kernel: audit: type=1106 audit(1769564033.723:1205): pid=7540 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.793381 kernel: audit: type=1104 audit(1769564033.724:1206): pid=7540 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:53.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.61:22-10.0.0.1:36640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:55.069300 kubelet[2995]: E0128 01:33:55.068750 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:33:58.083650 kubelet[2995]: E0128 01:33:58.083343 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:33:58.796504 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:33:58.796678 kernel: audit: type=1130 audit(1769564038.757:1208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.61:22-10.0.0.1:36652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:33:58.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.61:22-10.0.0.1:36652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:33:58.758686 systemd[1]: Started sshd@55-10.0.0.61:22-10.0.0.1:36652.service - OpenSSH per-connection server daemon (10.0.0.1:36652). Jan 28 01:33:59.022000 audit[7573]: USER_ACCT pid=7573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.024589 sshd[7573]: Accepted publickey for core from 10.0.0.1 port 36652 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:33:59.036162 sshd-session[7573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:59.030000 audit[7573]: CRED_ACQ pid=7573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.071275 kernel: audit: type=1101 audit(1769564039.022:1209): pid=7573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.071407 kernel: audit: type=1103 audit(1769564039.030:1210): pid=7573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.071463 kernel: audit: type=1006 audit(1769564039.030:1211): 
pid=7573 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=57 res=1 Jan 28 01:33:59.076808 systemd-logind[1590]: New session 57 of user core. Jan 28 01:33:59.030000 audit[7573]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd38f6dba0 a2=3 a3=0 items=0 ppid=1 pid=7573 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:59.030000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:59.112913 kernel: audit: type=1300 audit(1769564039.030:1211): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd38f6dba0 a2=3 a3=0 items=0 ppid=1 pid=7573 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:33:59.113185 kernel: audit: type=1327 audit(1769564039.030:1211): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:33:59.113546 systemd[1]: Started session-57.scope - Session 57 of User core. 
Jan 28 01:33:59.151000 audit[7573]: USER_START pid=7573 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.222717 kernel: audit: type=1105 audit(1769564039.151:1212): pid=7573 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.222869 kernel: audit: type=1103 audit(1769564039.165:1213): pid=7577 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.165000 audit[7577]: CRED_ACQ pid=7577 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.465141 sshd[7577]: Connection closed by 10.0.0.1 port 36652 Jan 28 01:33:59.467285 sshd-session[7573]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:59.471000 audit[7573]: USER_END pid=7573 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.483854 systemd[1]: sshd@55-10.0.0.61:22-10.0.0.1:36652.service: Deactivated successfully. 
Jan 28 01:33:59.494805 systemd[1]: session-57.scope: Deactivated successfully. Jan 28 01:33:59.501696 systemd-logind[1590]: Session 57 logged out. Waiting for processes to exit. Jan 28 01:33:59.504853 kernel: audit: type=1106 audit(1769564039.471:1214): pid=7573 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.504932 kernel: audit: type=1104 audit(1769564039.474:1215): pid=7573 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.474000 audit[7573]: CRED_DISP pid=7573 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:33:59.508626 systemd-logind[1590]: Removed session 57. Jan 28 01:33:59.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.61:22-10.0.0.1:36652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:00.066236 kubelet[2995]: E0128 01:34:00.065640 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:34:01.077156 kubelet[2995]: E0128 01:34:01.076487 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:03.071727 kubelet[2995]: E0128 01:34:03.070959 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:34:04.086239 kubelet[2995]: E0128 01:34:04.085926 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:34:04.555792 systemd[1]: Started sshd@56-10.0.0.61:22-10.0.0.1:44276.service - OpenSSH per-connection server daemon (10.0.0.1:44276). Jan 28 01:34:04.631299 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:34:04.631493 kernel: audit: type=1130 audit(1769564044.550:1217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.61:22-10.0.0.1:44276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:04.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.61:22-10.0.0.1:44276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:05.314980 sshd[7599]: Accepted publickey for core from 10.0.0.1 port 44276 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:05.312000 audit[7599]: USER_ACCT pid=7599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:05.328767 sshd-session[7599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:05.324000 audit[7599]: CRED_ACQ pid=7599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:05.409371 systemd-logind[1590]: New session 58 of user core. Jan 28 01:34:05.476228 kernel: audit: type=1101 audit(1769564045.312:1218): pid=7599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:05.476494 kernel: audit: type=1103 audit(1769564045.324:1219): pid=7599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:05.486295 kernel: audit: type=1006 audit(1769564045.324:1220): pid=7599 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=58 res=1 Jan 28 01:34:05.533633 kernel: audit: type=1300 audit(1769564045.324:1220): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3be29c40 a2=3 a3=0 items=0 ppid=1 pid=7599 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:05.324000 audit[7599]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3be29c40 a2=3 a3=0 items=0 ppid=1 pid=7599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:05.324000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:05.603507 systemd[1]: Started session-58.scope - Session 58 of User core. Jan 28 01:34:05.641214 kernel: audit: type=1327 audit(1769564045.324:1220): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:05.647000 audit[7599]: USER_START pid=7599 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:05.676000 audit[7603]: CRED_ACQ pid=7603 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:05.739989 kernel: audit: type=1105 audit(1769564045.647:1221): pid=7599 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:05.740261 kernel: audit: type=1103 audit(1769564045.676:1222): pid=7603 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:06.173801 kubelet[2995]: E0128 01:34:06.173600 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:34:06.520272 sshd[7603]: Connection closed by 10.0.0.1 port 44276 Jan 28 01:34:06.520935 sshd-session[7599]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:06.528000 audit[7599]: USER_END pid=7599 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:06.548990 systemd[1]: sshd@56-10.0.0.61:22-10.0.0.1:44276.service: Deactivated successfully. Jan 28 01:34:06.568350 systemd[1]: session-58.scope: Deactivated successfully. Jan 28 01:34:06.582899 systemd-logind[1590]: Session 58 logged out. Waiting for processes to exit. Jan 28 01:34:06.595730 systemd-logind[1590]: Removed session 58. 
Jan 28 01:34:06.600457 kernel: audit: type=1106 audit(1769564046.528:1223): pid=7599 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:06.528000 audit[7599]: CRED_DISP pid=7599 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:06.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.61:22-10.0.0.1:44276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:06.657450 kernel: audit: type=1104 audit(1769564046.528:1224): pid=7599 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:09.145380 kubelet[2995]: E0128 01:34:09.063239 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:34:11.092188 kubelet[2995]: E0128 01:34:11.089301 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:34:11.572895 systemd[1]: Started sshd@57-10.0.0.61:22-10.0.0.1:44280.service - OpenSSH per-connection server daemon (10.0.0.1:44280). Jan 28 01:34:11.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.61:22-10.0.0.1:44280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:11.623308 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:34:11.623482 kernel: audit: type=1130 audit(1769564051.603:1226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.61:22-10.0.0.1:44280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:12.019000 audit[7617]: USER_ACCT pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:12.051619 sshd-session[7617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:12.063542 sshd[7617]: Accepted publickey for core from 10.0.0.1 port 44280 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:12.063804 kubelet[2995]: E0128 01:34:12.062936 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:12.079318 kubelet[2995]: E0128 01:34:12.072412 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:34:12.115877 kernel: audit: type=1101 audit(1769564052.019:1227): pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:12.029000 audit[7617]: CRED_ACQ pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:12.164302 systemd-logind[1590]: New session 59 of user core. Jan 28 01:34:12.276761 kernel: audit: type=1103 audit(1769564052.029:1228): pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:12.276950 kernel: audit: type=1006 audit(1769564052.029:1229): pid=7617 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=59 res=1 Jan 28 01:34:12.029000 audit[7617]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd521afe80 a2=3 a3=0 items=0 ppid=1 pid=7617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:12.280644 systemd[1]: Started session-59.scope - Session 59 of User core. 
Jan 28 01:34:12.029000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:12.425528 kernel: audit: type=1300 audit(1769564052.029:1229): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd521afe80 a2=3 a3=0 items=0 ppid=1 pid=7617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:12.425701 kernel: audit: type=1327 audit(1769564052.029:1229): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:12.372000 audit[7617]: USER_START pid=7617 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:12.406000 audit[7621]: CRED_ACQ pid=7621 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:12.614443 kernel: audit: type=1105 audit(1769564052.372:1230): pid=7617 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:12.614607 kernel: audit: type=1103 audit(1769564052.406:1231): pid=7621 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:13.228832 sshd[7621]: Connection closed by 10.0.0.1 port 
44280 Jan 28 01:34:13.229897 sshd-session[7617]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:13.234000 audit[7617]: USER_END pid=7617 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:13.298116 kernel: audit: type=1106 audit(1769564053.234:1232): pid=7617 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:13.300952 systemd-logind[1590]: Session 59 logged out. Waiting for processes to exit. Jan 28 01:34:13.304432 systemd[1]: sshd@57-10.0.0.61:22-10.0.0.1:44280.service: Deactivated successfully. Jan 28 01:34:13.261000 audit[7617]: CRED_DISP pid=7617 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:13.364841 systemd[1]: session-59.scope: Deactivated successfully. Jan 28 01:34:13.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.61:22-10.0.0.1:44280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:13.369237 kernel: audit: type=1104 audit(1769564053.261:1233): pid=7617 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:13.368993 systemd-logind[1590]: Removed session 59. Jan 28 01:34:16.082515 kubelet[2995]: E0128 01:34:16.078815 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:34:18.102362 kubelet[2995]: E0128 01:34:18.093937 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:34:18.329199 systemd[1]: Started 
sshd@58-10.0.0.61:22-10.0.0.1:49014.service - OpenSSH per-connection server daemon (10.0.0.1:49014). Jan 28 01:34:18.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.61:22-10.0.0.1:49014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:18.422624 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:34:18.422777 kernel: audit: type=1130 audit(1769564058.328:1235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.61:22-10.0.0.1:49014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:18.871609 sshd[7634]: Accepted publickey for core from 10.0.0.1 port 49014 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:18.869000 audit[7634]: USER_ACCT pid=7634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:18.890820 sshd-session[7634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:18.925412 kernel: audit: type=1101 audit(1769564058.869:1236): pid=7634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:18.875000 audit[7634]: CRED_ACQ pid=7634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:18.998332 kernel: audit: type=1103 audit(1769564058.875:1237): pid=7634 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:18.998473 kernel: audit: type=1006 audit(1769564058.881:1238): pid=7634 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=60 res=1 Jan 28 01:34:18.983686 systemd-logind[1590]: New session 60 of user core. Jan 28 01:34:18.881000 audit[7634]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6d004100 a2=3 a3=0 items=0 ppid=1 pid=7634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:19.064369 kernel: audit: type=1300 audit(1769564058.881:1238): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6d004100 a2=3 a3=0 items=0 ppid=1 pid=7634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:18.881000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:19.093936 kernel: audit: type=1327 audit(1769564058.881:1238): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:19.092701 systemd[1]: Started session-60.scope - Session 60 of User core. 
Jan 28 01:34:19.124000 audit[7634]: USER_START pid=7634 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:19.175602 kernel: audit: type=1105 audit(1769564059.124:1239): pid=7634 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:19.157000 audit[7638]: CRED_ACQ pid=7638 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:19.246709 kernel: audit: type=1103 audit(1769564059.157:1240): pid=7638 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:20.279375 sshd[7638]: Connection closed by 10.0.0.1 port 49014 Jan 28 01:34:20.282737 sshd-session[7634]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:20.331000 audit[7634]: USER_END pid=7634 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:20.356318 systemd[1]: sshd@58-10.0.0.61:22-10.0.0.1:49014.service: Deactivated successfully. 
Jan 28 01:34:20.381961 kernel: audit: type=1106 audit(1769564060.331:1241): pid=7634 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:20.382298 kernel: audit: type=1104 audit(1769564060.331:1242): pid=7634 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:20.331000 audit[7634]: CRED_DISP pid=7634 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:20.390622 systemd[1]: session-60.scope: Deactivated successfully. Jan 28 01:34:20.405483 systemd-logind[1590]: Session 60 logged out. Waiting for processes to exit. Jan 28 01:34:20.407833 systemd-logind[1590]: Removed session 60. Jan 28 01:34:20.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.61:22-10.0.0.1:49014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:21.080438 kubelet[2995]: E0128 01:34:21.077504 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:34:22.120806 kubelet[2995]: E0128 01:34:22.120680 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:34:24.119309 kubelet[2995]: E0128 01:34:24.116302 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:34:24.140555 kubelet[2995]: E0128 01:34:24.122731 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 28 01:34:24.173315 kubelet[2995]: E0128 01:34:24.170669 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:34:25.465627 systemd[1]: Started sshd@59-10.0.0.61:22-10.0.0.1:37738.service - OpenSSH per-connection server daemon (10.0.0.1:37738). Jan 28 01:34:25.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.61:22-10.0.0.1:37738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:25.497698 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:34:25.497816 kernel: audit: type=1130 audit(1769564065.460:1244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.61:22-10.0.0.1:37738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:26.091000 audit[7677]: USER_ACCT pid=7677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:26.109547 sshd[7677]: Accepted publickey for core from 10.0.0.1 port 37738 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:26.123331 kernel: audit: type=1101 audit(1769564066.091:1245): pid=7677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:26.131548 sshd-session[7677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:26.127000 audit[7677]: CRED_ACQ pid=7677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:26.190744 systemd-logind[1590]: New session 61 of user core. 
Jan 28 01:34:26.279489 kernel: audit: type=1103 audit(1769564066.127:1246): pid=7677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:26.279709 kernel: audit: type=1006 audit(1769564066.127:1247): pid=7677 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=61 res=1 Jan 28 01:34:26.307831 kernel: audit: type=1300 audit(1769564066.127:1247): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe631781c0 a2=3 a3=0 items=0 ppid=1 pid=7677 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:26.127000 audit[7677]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe631781c0 a2=3 a3=0 items=0 ppid=1 pid=7677 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:26.308447 systemd[1]: Started session-61.scope - Session 61 of User core. 
Jan 28 01:34:26.127000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:26.391668 kernel: audit: type=1327 audit(1769564066.127:1247): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:26.338000 audit[7677]: USER_START pid=7677 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:26.425572 kernel: audit: type=1105 audit(1769564066.338:1248): pid=7677 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:26.425702 kernel: audit: type=1103 audit(1769564066.359:1249): pid=7681 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:26.359000 audit[7681]: CRED_ACQ pid=7681 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:27.084472 sshd[7681]: Connection closed by 10.0.0.1 port 37738 Jan 28 01:34:27.085670 kubelet[2995]: E0128 01:34:27.074854 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:34:27.090325 sshd-session[7677]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:27.104000 audit[7677]: USER_END pid=7677 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:27.160537 kernel: audit: type=1106 audit(1769564067.104:1250): pid=7677 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:27.163624 systemd[1]: sshd@59-10.0.0.61:22-10.0.0.1:37738.service: Deactivated successfully. Jan 28 01:34:27.171302 systemd[1]: session-61.scope: Deactivated successfully. Jan 28 01:34:27.104000 audit[7677]: CRED_DISP pid=7677 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:27.187866 systemd-logind[1590]: Session 61 logged out. Waiting for processes to exit. Jan 28 01:34:27.206690 systemd-logind[1590]: Removed session 61. Jan 28 01:34:27.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.61:22-10.0.0.1:37738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:27.244326 kernel: audit: type=1104 audit(1769564067.104:1251): pid=7677 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:31.117967 kubelet[2995]: E0128 01:34:31.117511 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:34:32.110225 kubelet[2995]: E0128 01:34:32.106809 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:34:32.184844 systemd[1]: Started sshd@60-10.0.0.61:22-10.0.0.1:37742.service - OpenSSH per-connection server daemon (10.0.0.1:37742). 
Jan 28 01:34:32.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.61:22-10.0.0.1:37742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:32.220319 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:34:32.220507 kernel: audit: type=1130 audit(1769564072.185:1253): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.61:22-10.0.0.1:37742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:32.697000 audit[7696]: USER_ACCT pid=7696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:32.708878 sshd[7696]: Accepted publickey for core from 10.0.0.1 port 37742 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:32.742508 sshd-session[7696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:32.761329 kernel: audit: type=1101 audit(1769564072.697:1254): pid=7696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:32.761411 kernel: audit: type=1103 audit(1769564072.731:1255): pid=7696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:32.731000 audit[7696]: CRED_ACQ pid=7696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:32.889303 kernel: audit: type=1006 audit(1769564072.731:1256): pid=7696 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=62 res=1 Jan 28 01:34:32.881843 systemd-logind[1590]: New session 62 of user core. Jan 28 01:34:32.731000 audit[7696]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeffeeab50 a2=3 a3=0 items=0 ppid=1 pid=7696 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:32.963292 kernel: audit: type=1300 audit(1769564072.731:1256): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeffeeab50 a2=3 a3=0 items=0 ppid=1 pid=7696 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:32.731000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:32.982883 systemd[1]: Started session-62.scope - Session 62 of User core. 
Jan 28 01:34:32.997258 kernel: audit: type=1327 audit(1769564072.731:1256): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:32.997000 audit[7696]: USER_START pid=7696 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:33.010000 audit[7700]: CRED_ACQ pid=7700 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:33.102490 kubelet[2995]: E0128 01:34:33.095666 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:33.103448 kubelet[2995]: E0128 01:34:33.086972 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:34:33.164414 kernel: audit: type=1105 audit(1769564072.997:1257): pid=7696 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 
28 01:34:33.164554 kernel: audit: type=1103 audit(1769564073.010:1258): pid=7700 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:34.108786 sshd[7700]: Connection closed by 10.0.0.1 port 37742 Jan 28 01:34:34.111824 sshd-session[7696]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:34.113000 audit[7696]: USER_END pid=7696 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:34.150706 kernel: audit: type=1106 audit(1769564074.113:1259): pid=7696 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:34.181850 systemd[1]: sshd@60-10.0.0.61:22-10.0.0.1:37742.service: Deactivated successfully. Jan 28 01:34:34.193783 systemd[1]: session-62.scope: Deactivated successfully. 
Jan 28 01:34:34.113000 audit[7696]: CRED_DISP pid=7696 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:34.264814 kernel: audit: type=1104 audit(1769564074.113:1260): pid=7696 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:34.215943 systemd-logind[1590]: Session 62 logged out. Waiting for processes to exit. Jan 28 01:34:34.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.61:22-10.0.0.1:37742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:34.220920 systemd-logind[1590]: Removed session 62. Jan 28 01:34:35.111296 kubelet[2995]: E0128 01:34:35.096472 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:34:39.082368 kubelet[2995]: E0128 01:34:39.082103 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:34:39.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.61:22-10.0.0.1:56420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:39.173212 systemd[1]: Started sshd@61-10.0.0.61:22-10.0.0.1:56420.service - OpenSSH per-connection server daemon (10.0.0.1:56420). Jan 28 01:34:39.180706 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:34:39.180988 kernel: audit: type=1130 audit(1769564079.172:1262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.61:22-10.0.0.1:56420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:39.548514 sshd[7722]: Accepted publickey for core from 10.0.0.1 port 56420 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:39.583096 kernel: audit: type=1101 audit(1769564079.543:1263): pid=7722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:39.543000 audit[7722]: USER_ACCT pid=7722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:39.610000 audit[7722]: CRED_ACQ pid=7722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:39.630298 sshd-session[7722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:39.693193 kernel: audit: type=1103 audit(1769564079.610:1264): pid=7722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:39.693311 kernel: audit: type=1006 audit(1769564079.610:1265): pid=7722 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=63 res=1 Jan 28 01:34:39.610000 audit[7722]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4cbb7f10 a2=3 a3=0 items=0 ppid=1 pid=7722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:39.712276 systemd-logind[1590]: New session 63 of user core. Jan 28 01:34:39.779977 kernel: audit: type=1300 audit(1769564079.610:1265): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4cbb7f10 a2=3 a3=0 items=0 ppid=1 pid=7722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:39.780294 kernel: audit: type=1327 audit(1769564079.610:1265): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:39.610000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:40.033913 systemd[1]: Started session-63.scope - Session 63 of User core. Jan 28 01:34:40.577000 audit[7722]: USER_START pid=7722 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:40.683164 kernel: audit: type=1105 audit(1769564080.577:1266): pid=7722 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:40.778434 kernel: audit: type=1103 audit(1769564080.602:1267): pid=7726 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:40.602000 audit[7726]: CRED_ACQ pid=7726 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:41.211400 sshd[7726]: Connection closed by 10.0.0.1 port 56420 Jan 28 01:34:41.218390 sshd-session[7722]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:41.270724 systemd[1]: sshd@61-10.0.0.61:22-10.0.0.1:56420.service: Deactivated successfully. Jan 28 01:34:41.241000 audit[7722]: USER_END pid=7722 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:41.290402 systemd[1]: session-63.scope: Deactivated successfully. Jan 28 01:34:41.314987 systemd-logind[1590]: Session 63 logged out. Waiting for processes to exit. Jan 28 01:34:41.335927 systemd-logind[1590]: Removed session 63. 
Jan 28 01:34:41.393712 kernel: audit: type=1106 audit(1769564081.241:1268): pid=7722 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:41.393864 kernel: audit: type=1104 audit(1769564081.241:1269): pid=7722 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:41.241000 audit[7722]: CRED_DISP pid=7722 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:41.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.61:22-10.0.0.1:56420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:42.079462 kubelet[2995]: E0128 01:34:42.076506 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:34:45.162384 kubelet[2995]: E0128 01:34:45.112389 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:34:46.066714 kubelet[2995]: E0128 01:34:46.066632 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:34:46.311215 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:34:46.311376 kernel: audit: type=1130 audit(1769564086.292:1271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.61:22-10.0.0.1:35802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:46.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.61:22-10.0.0.1:35802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:46.293427 systemd[1]: Started sshd@62-10.0.0.61:22-10.0.0.1:35802.service - OpenSSH per-connection server daemon (10.0.0.1:35802). Jan 28 01:34:46.703779 sshd[7740]: Accepted publickey for core from 10.0.0.1 port 35802 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:46.708726 sshd-session[7740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:46.697000 audit[7740]: USER_ACCT pid=7740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:46.805513 kernel: audit: type=1101 audit(1769564086.697:1272): pid=7740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:46.705000 audit[7740]: CRED_ACQ pid=7740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:46.874588 kernel: audit: type=1103 audit(1769564086.705:1273): pid=7740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:46.914391 kernel: audit: type=1006 audit(1769564086.705:1274): pid=7740 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=64 res=1 Jan 28 01:34:46.921810 systemd-logind[1590]: New session 64 of user core. Jan 28 01:34:46.705000 audit[7740]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8e95d470 a2=3 a3=0 items=0 ppid=1 pid=7740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=64 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:46.983837 kernel: audit: type=1300 audit(1769564086.705:1274): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8e95d470 a2=3 a3=0 items=0 ppid=1 pid=7740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=64 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:46.928421 systemd[1]: Started session-64.scope - Session 64 of User core. 
Jan 28 01:34:47.158481 kernel: audit: type=1327 audit(1769564086.705:1274): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:47.158660 kernel: audit: type=1105 audit(1769564086.993:1275): pid=7740 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:46.705000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:46.993000 audit[7740]: USER_START pid=7740 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:47.034000 audit[7744]: CRED_ACQ pid=7744 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:47.277728 kernel: audit: type=1103 audit(1769564087.034:1276): pid=7744 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:47.318114 kubelet[2995]: E0128 01:34:47.317245 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:34:48.073917 kubelet[2995]: E0128 01:34:48.072265 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:48.220121 sshd[7744]: Connection closed by 10.0.0.1 port 35802 Jan 28 01:34:48.269000 audit[7740]: USER_END pid=7740 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:48.264316 sshd-session[7740]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:48.379272 kernel: audit: type=1106 audit(1769564088.269:1277): pid=7740 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:48.270000 audit[7740]: CRED_DISP pid=7740 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:48.466670 systemd[1]: sshd@62-10.0.0.61:22-10.0.0.1:35802.service: Deactivated successfully. 
Jan 28 01:34:48.493767 kernel: audit: type=1104 audit(1769564088.270:1278): pid=7740 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:48.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.61:22-10.0.0.1:35802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:48.521453 systemd[1]: session-64.scope: Deactivated successfully. Jan 28 01:34:48.560677 systemd-logind[1590]: Session 64 logged out. Waiting for processes to exit. Jan 28 01:34:48.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.61:22-10.0.0.1:35816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:48.592462 systemd[1]: Started sshd@63-10.0.0.61:22-10.0.0.1:35816.service - OpenSSH per-connection server daemon (10.0.0.1:35816). Jan 28 01:34:48.629330 systemd-logind[1590]: Removed session 64. 
Jan 28 01:34:48.864585 sshd[7758]: Accepted publickey for core from 10.0.0.1 port 35816 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:48.863000 audit[7758]: USER_ACCT pid=7758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:48.865000 audit[7758]: CRED_ACQ pid=7758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:48.867000 audit[7758]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3ceb4480 a2=3 a3=0 items=0 ppid=1 pid=7758 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=65 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:48.867000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:48.870821 sshd-session[7758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:48.888760 systemd-logind[1590]: New session 65 of user core. Jan 28 01:34:48.899436 systemd[1]: Started session-65.scope - Session 65 of User core. 
Jan 28 01:34:48.908000 audit[7758]: USER_START pid=7758 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:48.913000 audit[7763]: CRED_ACQ pid=7763 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:49.988449 sshd[7763]: Connection closed by 10.0.0.1 port 35816 Jan 28 01:34:49.991313 sshd-session[7758]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:49.996000 audit[7758]: USER_END pid=7758 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:49.996000 audit[7758]: CRED_DISP pid=7758 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:50.049095 systemd[1]: sshd@63-10.0.0.61:22-10.0.0.1:35816.service: Deactivated successfully. Jan 28 01:34:50.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.61:22-10.0.0.1:35816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:50.069356 kubelet[2995]: E0128 01:34:50.068615 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:34:50.072858 systemd[1]: session-65.scope: Deactivated successfully. Jan 28 01:34:50.080688 systemd-logind[1590]: Session 65 logged out. Waiting for processes to exit. Jan 28 01:34:50.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.61:22-10.0.0.1:35818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:50.108543 systemd[1]: Started sshd@64-10.0.0.61:22-10.0.0.1:35818.service - OpenSSH per-connection server daemon (10.0.0.1:35818). Jan 28 01:34:50.111873 systemd-logind[1590]: Removed session 65. 
Jan 28 01:34:50.655000 audit[7801]: USER_ACCT pid=7801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:50.674845 sshd[7801]: Accepted publickey for core from 10.0.0.1 port 35818 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:50.689000 audit[7801]: CRED_ACQ pid=7801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:50.689000 audit[7801]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb507a0e0 a2=3 a3=0 items=0 ppid=1 pid=7801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=66 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:50.689000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:50.708821 sshd-session[7801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:50.795286 systemd-logind[1590]: New session 66 of user core. Jan 28 01:34:50.820834 systemd[1]: Started session-66.scope - Session 66 of User core. 
Jan 28 01:34:50.874000 audit[7801]: USER_START pid=7801 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:50.895000 audit[7806]: CRED_ACQ pid=7806 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:53.087247 kubelet[2995]: E0128 01:34:53.086402 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:54.115327 sshd[7806]: Connection closed by 10.0.0.1 port 35818 Jan 28 01:34:54.122204 sshd-session[7801]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:54.232460 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 28 01:34:54.232612 kernel: audit: type=1106 audit(1769564094.159:1295): pid=7801 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:54.159000 audit[7801]: USER_END pid=7801 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:54.160000 audit[7801]: CRED_DISP pid=7801 uid=0 auid=500 ses=66 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:54.273423 kernel: audit: type=1104 audit(1769564094.160:1296): pid=7801 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:54.284000 audit[7823]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=7823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:34:54.300372 kernel: audit: type=1325 audit(1769564094.284:1297): table=filter:143 family=2 entries=26 op=nft_register_rule pid=7823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:34:54.284000 audit[7823]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe336e9590 a2=0 a3=7ffe336e957c items=0 ppid=3100 pid=7823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:54.329443 kubelet[2995]: E0128 01:34:54.309623 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:34:54.301630 systemd[1]: sshd@64-10.0.0.61:22-10.0.0.1:35818.service: Deactivated successfully. Jan 28 01:34:54.363719 kernel: audit: type=1300 audit(1769564094.284:1297): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe336e9590 a2=0 a3=7ffe336e957c items=0 ppid=3100 pid=7823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:54.311629 systemd[1]: session-66.scope: Deactivated successfully. Jan 28 01:34:54.317360 systemd-logind[1590]: Session 66 logged out. Waiting for processes to exit. Jan 28 01:34:54.339358 systemd[1]: Started sshd@65-10.0.0.61:22-10.0.0.1:37962.service - OpenSSH per-connection server daemon (10.0.0.1:37962). Jan 28 01:34:54.342797 systemd-logind[1590]: Removed session 66. Jan 28 01:34:54.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:34:54.418546 kernel: audit: type=1327 audit(1769564094.284:1297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:34:54.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.61:22-10.0.0.1:35818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:54.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.61:22-10.0.0.1:37962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:34:54.518381 kernel: audit: type=1131 audit(1769564094.305:1298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.61:22-10.0.0.1:35818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:54.518518 kernel: audit: type=1130 audit(1769564094.334:1299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.61:22-10.0.0.1:37962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:54.518561 kernel: audit: type=1325 audit(1769564094.348:1300): table=nat:144 family=2 entries=20 op=nft_register_rule pid=7823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:34:54.348000 audit[7823]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=7823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:34:54.348000 audit[7823]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe336e9590 a2=0 a3=0 items=0 ppid=3100 pid=7823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:54.627884 kernel: audit: type=1300 audit(1769564094.348:1300): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe336e9590 a2=0 a3=0 items=0 ppid=3100 pid=7823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:54.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:34:54.661262 kernel: audit: type=1327 audit(1769564094.348:1300): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:34:54.755000 audit[7829]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=7829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:34:54.755000 audit[7829]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd2c643be0 a2=0 a3=7ffd2c643bcc items=0 ppid=3100 pid=7829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:54.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:34:54.771000 audit[7829]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=7829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:34:54.771000 audit[7829]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd2c643be0 a2=0 a3=0 items=0 ppid=3100 pid=7829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:54.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:34:54.840000 audit[7827]: USER_ACCT pid=7827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:54.849370 sshd[7827]: Accepted publickey for core from 10.0.0.1 port 37962 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:54.851655 sshd-session[7827]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:54.846000 audit[7827]: CRED_ACQ pid=7827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:54.846000 audit[7827]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb2eaaff0 a2=3 a3=0 items=0 ppid=1 pid=7827 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=67 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:54.846000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:54.870993 systemd-logind[1590]: New session 67 of user core. Jan 28 01:34:54.887854 systemd[1]: Started session-67.scope - Session 67 of User core. Jan 28 01:34:54.903000 audit[7827]: USER_START pid=7827 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:54.913000 audit[7833]: CRED_ACQ pid=7833 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:55.075749 kubelet[2995]: E0128 01:34:55.070431 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:34:56.617225 sshd[7833]: Connection closed by 10.0.0.1 port 37962 Jan 28 01:34:56.619690 sshd-session[7827]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:56.653617 systemd[1]: Started sshd@66-10.0.0.61:22-10.0.0.1:37978.service - OpenSSH per-connection server daemon (10.0.0.1:37978). Jan 28 01:34:56.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.61:22-10.0.0.1:37978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:56.728000 audit[7827]: USER_END pid=7827 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:56.728000 audit[7827]: CRED_DISP pid=7827 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:56.751630 systemd[1]: sshd@65-10.0.0.61:22-10.0.0.1:37962.service: Deactivated successfully. Jan 28 01:34:56.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.61:22-10.0.0.1:37962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:56.775961 systemd[1]: session-67.scope: Deactivated successfully. Jan 28 01:34:56.792319 systemd-logind[1590]: Session 67 logged out. Waiting for processes to exit. Jan 28 01:34:56.832112 systemd-logind[1590]: Removed session 67. 
Jan 28 01:34:57.078000 audit[7841]: USER_ACCT pid=7841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:57.102096 sshd[7841]: Accepted publickey for core from 10.0.0.1 port 37978 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:34:57.101000 audit[7841]: CRED_ACQ pid=7841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:57.101000 audit[7841]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8fec73e0 a2=3 a3=0 items=0 ppid=1 pid=7841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=68 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:34:57.101000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:34:57.124719 sshd-session[7841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:57.264513 systemd-logind[1590]: New session 68 of user core. Jan 28 01:34:57.267504 systemd[1]: Started session-68.scope - Session 68 of User core. 
Jan 28 01:34:57.285000 audit[7841]: USER_START pid=7841 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:57.297000 audit[7848]: CRED_ACQ pid=7848 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:58.134539 kubelet[2995]: E0128 01:34:58.127492 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:34:58.289520 sshd[7848]: Connection closed by 10.0.0.1 port 37978 Jan 28 01:34:58.299398 sshd-session[7841]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:58.334000 audit[7841]: USER_END pid=7841 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:58.334000 audit[7841]: CRED_DISP pid=7841 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:34:58.360777 systemd[1]: sshd@66-10.0.0.61:22-10.0.0.1:37978.service: Deactivated successfully. Jan 28 01:34:58.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.61:22-10.0.0.1:37978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:34:58.382901 systemd[1]: session-68.scope: Deactivated successfully. Jan 28 01:34:58.391966 systemd-logind[1590]: Session 68 logged out. Waiting for processes to exit. Jan 28 01:34:58.428708 systemd-logind[1590]: Removed session 68. Jan 28 01:34:59.071361 kubelet[2995]: E0128 01:34:59.068288 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:35:02.077468 kubelet[2995]: E0128 01:35:02.073373 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:35:03.074869 kubelet[2995]: E0128 01:35:03.071473 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:35:03.377142 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 28 01:35:03.377367 kernel: audit: type=1130 audit(1769564103.363:1320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.61:22-10.0.0.1:53666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:03.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.61:22-10.0.0.1:53666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:03.365619 systemd[1]: Started sshd@67-10.0.0.61:22-10.0.0.1:53666.service - OpenSSH per-connection server daemon (10.0.0.1:53666). 
Jan 28 01:35:03.901000 audit[7864]: USER_ACCT pid=7864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:03.919505 sshd[7864]: Accepted publickey for core from 10.0.0.1 port 53666 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:35:04.017494 sshd-session[7864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:04.066398 kernel: audit: type=1101 audit(1769564103.901:1321): pid=7864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:04.000000 audit[7864]: CRED_ACQ pid=7864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:04.248813 kernel: audit: type=1103 audit(1769564104.000:1322): pid=7864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:04.249663 kernel: audit: type=1006 audit(1769564104.002:1323): pid=7864 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=69 res=1 Jan 28 01:35:04.249724 kernel: audit: type=1300 audit(1769564104.002:1323): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc5789e80 a2=3 a3=0 items=0 ppid=1 pid=7864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:04.002000 audit[7864]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc5789e80 a2=3 a3=0 items=0 ppid=1 pid=7864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:04.263620 systemd-logind[1590]: New session 69 of user core. Jan 28 01:35:04.357929 kernel: audit: type=1327 audit(1769564104.002:1323): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:04.002000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:04.420554 systemd[1]: Started session-69.scope - Session 69 of User core. Jan 28 01:35:04.470000 audit[7864]: USER_START pid=7864 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:04.594144 kernel: audit: type=1105 audit(1769564104.470:1324): pid=7864 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:04.476000 audit[7868]: CRED_ACQ pid=7868 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:04.727821 kernel: audit: type=1103 audit(1769564104.476:1325): pid=7868 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:05.628364 sshd[7868]: Connection closed by 10.0.0.1 port 53666 Jan 28 01:35:05.628462 sshd-session[7864]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:05.670000 audit[7864]: USER_END pid=7864 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:05.703716 systemd[1]: sshd@67-10.0.0.61:22-10.0.0.1:53666.service: Deactivated successfully. Jan 28 01:35:05.671000 audit[7864]: CRED_DISP pid=7864 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:05.729799 systemd[1]: session-69.scope: Deactivated successfully. Jan 28 01:35:05.749629 systemd-logind[1590]: Session 69 logged out. Waiting for processes to exit. 
Jan 28 01:35:05.777626 kernel: audit: type=1106 audit(1769564105.670:1326): pid=7864 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:05.777882 kernel: audit: type=1104 audit(1769564105.671:1327): pid=7864 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:05.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.61:22-10.0.0.1:53666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:05.783608 systemd-logind[1590]: Removed session 69. 
Jan 28 01:35:08.074784 kubelet[2995]: E0128 01:35:08.067973 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:35:09.079403 kubelet[2995]: E0128 01:35:09.078437 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:35:10.061573 kubelet[2995]: E0128 01:35:10.060631 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:10.063127 kubelet[2995]: E0128 01:35:10.062938 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:35:10.077907 kubelet[2995]: E0128 01:35:10.075742 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:35:10.748903 systemd[1]: Started sshd@68-10.0.0.61:22-10.0.0.1:53668.service - OpenSSH per-connection server daemon (10.0.0.1:53668). Jan 28 01:35:10.872337 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:35:10.872518 kernel: audit: type=1130 audit(1769564110.748:1329): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.61:22-10.0.0.1:53668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:10.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.61:22-10.0.0.1:53668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:11.231590 sshd[7881]: Accepted publickey for core from 10.0.0.1 port 53668 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:35:11.224000 audit[7881]: USER_ACCT pid=7881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:11.243781 sshd-session[7881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:11.311578 systemd-logind[1590]: New session 70 of user core. Jan 28 01:35:11.229000 audit[7881]: CRED_ACQ pid=7881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:11.408776 kernel: audit: type=1101 audit(1769564111.224:1330): pid=7881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:11.408951 kernel: audit: type=1103 audit(1769564111.229:1331): pid=7881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:11.409303 kernel: audit: type=1006 audit(1769564111.230:1332): pid=7881 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 
auid=500 tty=(none) old-ses=4294967295 ses=70 res=1 Jan 28 01:35:11.230000 audit[7881]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3788c9f0 a2=3 a3=0 items=0 ppid=1 pid=7881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:11.511607 systemd[1]: Started session-70.scope - Session 70 of User core. Jan 28 01:35:11.565429 kernel: audit: type=1300 audit(1769564111.230:1332): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3788c9f0 a2=3 a3=0 items=0 ppid=1 pid=7881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:11.565569 kernel: audit: type=1327 audit(1769564111.230:1332): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:11.230000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:11.584000 audit[7881]: USER_START pid=7881 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:11.617000 audit[7885]: CRED_ACQ pid=7885 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:11.767483 kernel: audit: type=1105 audit(1769564111.584:1333): pid=7881 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:11.767635 kernel: audit: type=1103 audit(1769564111.617:1334): pid=7885 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:12.390622 sshd[7885]: Connection closed by 10.0.0.1 port 53668 Jan 28 01:35:12.396589 sshd-session[7881]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:12.410000 audit[7881]: USER_END pid=7881 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:12.437142 systemd[1]: sshd@68-10.0.0.61:22-10.0.0.1:53668.service: Deactivated successfully. Jan 28 01:35:12.448745 systemd[1]: session-70.scope: Deactivated successfully. Jan 28 01:35:12.472956 systemd-logind[1590]: Session 70 logged out. Waiting for processes to exit. Jan 28 01:35:12.491985 systemd-logind[1590]: Removed session 70. 
Jan 28 01:35:12.537643 kernel: audit: type=1106 audit(1769564112.410:1335): pid=7881 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:12.417000 audit[7881]: CRED_DISP pid=7881 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:12.603359 kernel: audit: type=1104 audit(1769564112.417:1336): pid=7881 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:12.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.61:22-10.0.0.1:53668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:14.111653 kubelet[2995]: E0128 01:35:14.109960 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:35:17.093256 kubelet[2995]: E0128 01:35:17.092932 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:35:17.563648 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:35:17.563811 kernel: audit: type=1130 audit(1769564117.482:1338): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.61:22-10.0.0.1:34720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:17.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.61:22-10.0.0.1:34720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:17.483757 systemd[1]: Started sshd@69-10.0.0.61:22-10.0.0.1:34720.service - OpenSSH per-connection server daemon (10.0.0.1:34720). 
Jan 28 01:35:18.066466 kubelet[2995]: E0128 01:35:18.064827 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:18.410000 audit[7900]: USER_ACCT pid=7900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:18.433705 sshd[7900]: Accepted publickey for core from 10.0.0.1 port 34720 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:35:18.497454 sshd-session[7900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:18.596797 kernel: audit: type=1101 audit(1769564118.410:1339): pid=7900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:18.458000 audit[7900]: CRED_ACQ pid=7900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:18.679312 systemd-logind[1590]: New session 71 of user core. Jan 28 01:35:18.782582 kernel: audit: type=1103 audit(1769564118.458:1340): pid=7900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:18.798484 systemd[1]: Started session-71.scope - Session 71 of User core. 
Jan 28 01:35:18.898784 kernel: audit: type=1006 audit(1769564118.470:1341): pid=7900 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=71 res=1 Jan 28 01:35:18.470000 audit[7900]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6fb63840 a2=3 a3=0 items=0 ppid=1 pid=7900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:19.054360 kernel: audit: type=1300 audit(1769564118.470:1341): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6fb63840 a2=3 a3=0 items=0 ppid=1 pid=7900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:19.064700 kernel: audit: type=1327 audit(1769564118.470:1341): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:18.470000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:18.836000 audit[7900]: USER_START pid=7900 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:19.121538 kubelet[2995]: E0128 01:35:19.116626 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:19.153136 kubelet[2995]: E0128 01:35:19.152652 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:19.182938 kernel: 
audit: type=1105 audit(1769564118.836:1342): pid=7900 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:19.183280 kernel: audit: type=1103 audit(1769564118.881:1343): pid=7904 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:18.881000 audit[7904]: CRED_ACQ pid=7904 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:19.334575 kubelet[2995]: E0128 01:35:19.294528 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:35:20.926770 sshd[7904]: Connection closed by 10.0.0.1 port 34720 Jan 28 01:35:20.954978 sshd-session[7900]: 
pam_unix(sshd:session): session closed for user core Jan 28 01:35:20.987000 audit[7900]: USER_END pid=7900 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:21.024549 kernel: audit: type=1106 audit(1769564120.987:1344): pid=7900 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:21.027695 systemd[1]: sshd@69-10.0.0.61:22-10.0.0.1:34720.service: Deactivated successfully. Jan 28 01:35:20.994000 audit[7900]: CRED_DISP pid=7900 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:21.044136 systemd[1]: session-71.scope: Deactivated successfully. Jan 28 01:35:21.125271 kernel: audit: type=1104 audit(1769564120.994:1345): pid=7900 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:21.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.61:22-10.0.0.1:34720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:21.149143 kubelet[2995]: E0128 01:35:21.137783 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:35:21.149143 kubelet[2995]: E0128 01:35:21.137786 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:35:21.146262 systemd-logind[1590]: Session 71 logged out. Waiting for processes to exit. Jan 28 01:35:21.200902 systemd-logind[1590]: Removed session 71. 
Jan 28 01:35:24.168798 kubelet[2995]: E0128 01:35:24.133607 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:35:26.090708 kubelet[2995]: E0128 01:35:26.083171 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:35:26.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.61:22-10.0.0.1:56116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:26.209398 systemd[1]: Started sshd@70-10.0.0.61:22-10.0.0.1:56116.service - OpenSSH per-connection server daemon (10.0.0.1:56116). Jan 28 01:35:26.228683 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:35:26.228731 kernel: audit: type=1130 audit(1769564126.210:1347): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.61:22-10.0.0.1:56116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:27.200000 audit[7943]: USER_ACCT pid=7943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:27.268765 kernel: audit: type=1101 audit(1769564127.200:1348): pid=7943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:27.212326 sshd-session[7943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:27.269765 sshd[7943]: Accepted publickey for core from 10.0.0.1 port 56116 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:35:27.207000 audit[7943]: CRED_ACQ pid=7943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:27.321708 systemd-logind[1590]: New session 72 of user core. 
Jan 28 01:35:27.387301 kernel: audit: type=1103 audit(1769564127.207:1349): pid=7943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:27.387468 kernel: audit: type=1006 audit(1769564127.207:1350): pid=7943 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=72 res=1 Jan 28 01:35:27.207000 audit[7943]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1c0028e0 a2=3 a3=0 items=0 ppid=1 pid=7943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:27.426239 kernel: audit: type=1300 audit(1769564127.207:1350): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1c0028e0 a2=3 a3=0 items=0 ppid=1 pid=7943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:27.453962 kernel: audit: type=1327 audit(1769564127.207:1350): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:27.207000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:27.453713 systemd[1]: Started session-72.scope - Session 72 of User core. 
Jan 28 01:35:27.466000 audit[7943]: USER_START pid=7943 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:27.509455 kernel: audit: type=1105 audit(1769564127.466:1351): pid=7943 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:27.509621 kernel: audit: type=1103 audit(1769564127.481:1352): pid=7947 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:27.481000 audit[7947]: CRED_ACQ pid=7947 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:28.254135 sshd[7947]: Connection closed by 10.0.0.1 port 56116 Jan 28 01:35:28.265315 sshd-session[7943]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:28.283000 audit[7943]: USER_END pid=7943 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:28.401541 kernel: audit: type=1106 audit(1769564128.283:1353): pid=7943 uid=0 auid=500 ses=72 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:28.524876 kernel: audit: type=1104 audit(1769564128.283:1354): pid=7943 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:28.283000 audit[7943]: CRED_DISP pid=7943 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:28.704153 systemd[1]: sshd@70-10.0.0.61:22-10.0.0.1:56116.service: Deactivated successfully. Jan 28 01:35:28.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.61:22-10.0.0.1:56116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:28.779891 systemd[1]: session-72.scope: Deactivated successfully. Jan 28 01:35:28.793912 systemd-logind[1590]: Session 72 logged out. Waiting for processes to exit. Jan 28 01:35:28.800548 systemd-logind[1590]: Removed session 72. 
Jan 28 01:35:32.061724 kubelet[2995]: E0128 01:35:32.061685 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:32.064931 kubelet[2995]: E0128 01:35:32.063290 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:35:32.089939 kubelet[2995]: E0128 01:35:32.087984 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:35:32.089939 kubelet[2995]: E0128 01:35:32.089880 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:35:33.349537 systemd[1]: Started sshd@71-10.0.0.61:22-10.0.0.1:49910.service - OpenSSH per-connection server daemon (10.0.0.1:49910). Jan 28 01:35:33.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.61:22-10.0.0.1:49910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:33.372589 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:35:33.372660 kernel: audit: type=1130 audit(1769564133.357:1356): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.61:22-10.0.0.1:49910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:33.916000 audit[7962]: USER_ACCT pid=7962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:33.930386 sshd[7962]: Accepted publickey for core from 10.0.0.1 port 49910 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:35:33.962766 sshd-session[7962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:34.012097 kernel: audit: type=1101 audit(1769564133.916:1357): pid=7962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:34.019420 kernel: audit: type=1103 audit(1769564133.928:1358): pid=7962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:33.928000 audit[7962]: CRED_ACQ pid=7962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:34.071625 systemd-logind[1590]: New session 73 of user core. 
Jan 28 01:35:34.155403 kernel: audit: type=1006 audit(1769564133.928:1359): pid=7962 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=73 res=1 Jan 28 01:35:33.928000 audit[7962]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8ee24fd0 a2=3 a3=0 items=0 ppid=1 pid=7962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:34.216401 kernel: audit: type=1300 audit(1769564133.928:1359): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8ee24fd0 a2=3 a3=0 items=0 ppid=1 pid=7962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:34.216542 kernel: audit: type=1327 audit(1769564133.928:1359): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:33.928000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:34.216612 systemd[1]: Started session-73.scope - Session 73 of User core. 
Jan 28 01:35:34.250402 kernel: audit: type=1105 audit(1769564134.238:1360): pid=7962 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:34.238000 audit[7962]: USER_START pid=7962 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:34.239000 audit[7966]: CRED_ACQ pid=7966 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:34.400596 kernel: audit: type=1103 audit(1769564134.239:1361): pid=7966 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:36.199116 kubelet[2995]: E0128 01:35:36.193705 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.1s" Jan 28 01:35:36.225644 kubelet[2995]: E0128 01:35:36.224518 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:35:36.377334 sshd[7966]: Connection closed by 10.0.0.1 port 49910 Jan 28 01:35:36.379907 sshd-session[7962]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:36.401000 audit[7962]: USER_END pid=7962 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:36.457702 kernel: audit: type=1106 audit(1769564136.401:1362): pid=7962 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:36.460888 systemd[1]: sshd@71-10.0.0.61:22-10.0.0.1:49910.service: Deactivated successfully. Jan 28 01:35:36.401000 audit[7962]: CRED_DISP pid=7962 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:36.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.61:22-10.0.0.1:49910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:36.482731 kernel: audit: type=1104 audit(1769564136.401:1363): pid=7962 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:36.508612 systemd[1]: session-73.scope: Deactivated successfully. Jan 28 01:35:36.549618 systemd-logind[1590]: Session 73 logged out. Waiting for processes to exit. Jan 28 01:35:36.566262 systemd-logind[1590]: Removed session 73. Jan 28 01:35:37.065793 kubelet[2995]: E0128 01:35:37.064358 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:38.063044 kubelet[2995]: E0128 01:35:38.062410 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:35:38.096405 kubelet[2995]: E0128 01:35:38.071882 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 
01:35:41.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.61:22-10.0.0.1:49924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:41.430922 systemd[1]: Started sshd@72-10.0.0.61:22-10.0.0.1:49924.service - OpenSSH per-connection server daemon (10.0.0.1:49924). Jan 28 01:35:41.470313 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:35:41.472157 kernel: audit: type=1130 audit(1769564141.430:1365): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.61:22-10.0.0.1:49924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:41.741000 audit[7993]: USER_ACCT pid=7993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:41.759440 sshd[7993]: Accepted publickey for core from 10.0.0.1 port 49924 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:35:41.771542 sshd-session[7993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:41.763000 audit[7993]: CRED_ACQ pid=7993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:41.815430 systemd-logind[1590]: New session 74 of user core. 
Jan 28 01:35:41.845642 kernel: audit: type=1101 audit(1769564141.741:1366): pid=7993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:41.846144 kernel: audit: type=1103 audit(1769564141.763:1367): pid=7993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:41.851150 kernel: audit: type=1006 audit(1769564141.763:1368): pid=7993 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=74 res=1 Jan 28 01:35:41.763000 audit[7993]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb2f330a0 a2=3 a3=0 items=0 ppid=1 pid=7993 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=74 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:41.899409 kernel: audit: type=1300 audit(1769564141.763:1368): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb2f330a0 a2=3 a3=0 items=0 ppid=1 pid=7993 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=74 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:41.763000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:41.909370 kernel: audit: type=1327 audit(1769564141.763:1368): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:41.907099 systemd[1]: Started session-74.scope - Session 74 of User core. 
Jan 28 01:35:41.929000 audit[7993]: USER_START pid=7993 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:42.019992 kernel: audit: type=1105 audit(1769564141.929:1369): pid=7993 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:42.020174 kernel: audit: type=1103 audit(1769564142.011:1370): pid=8004 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:42.011000 audit[8004]: CRED_ACQ pid=8004 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:43.009884 sshd[8004]: Connection closed by 10.0.0.1 port 49924 Jan 28 01:35:43.009685 sshd-session[7993]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:43.023000 audit[7993]: USER_END pid=7993 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:43.153317 kernel: audit: type=1106 audit(1769564143.023:1371): pid=7993 uid=0 auid=500 ses=74 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:43.047892 systemd[1]: sshd@72-10.0.0.61:22-10.0.0.1:49924.service: Deactivated successfully. Jan 28 01:35:43.080833 systemd[1]: session-74.scope: Deactivated successfully. Jan 28 01:35:43.152890 systemd-logind[1590]: Session 74 logged out. Waiting for processes to exit. Jan 28 01:35:43.168386 systemd-logind[1590]: Removed session 74. Jan 28 01:35:43.023000 audit[7993]: CRED_DISP pid=7993 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:43.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.61:22-10.0.0.1:49924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:43.262183 kernel: audit: type=1104 audit(1769564143.023:1372): pid=7993 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:45.109495 kubelet[2995]: E0128 01:35:45.108420 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:35:45.109495 kubelet[2995]: E0128 01:35:45.108798 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:35:47.152366 kubelet[2995]: E0128 01:35:47.149913 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:35:47.152366 kubelet[2995]: E0128 01:35:47.150492 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:35:48.100930 systemd[1]: Started sshd@73-10.0.0.61:22-10.0.0.1:35950.service - OpenSSH per-connection server daemon (10.0.0.1:35950). Jan 28 01:35:48.128356 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:35:48.128409 kernel: audit: type=1130 audit(1769564148.101:1374): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.61:22-10.0.0.1:35950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:48.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.61:22-10.0.0.1:35950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:48.408000 audit[8018]: USER_ACCT pid=8018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:48.420729 sshd[8018]: Accepted publickey for core from 10.0.0.1 port 35950 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:35:48.423282 sshd-session[8018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:48.419000 audit[8018]: CRED_ACQ pid=8018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:48.492409 systemd-logind[1590]: New session 75 of user core. 
Jan 28 01:35:48.499763 kernel: audit: type=1101 audit(1769564148.408:1375): pid=8018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:48.499874 kernel: audit: type=1103 audit(1769564148.419:1376): pid=8018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:48.522190 kernel: audit: type=1006 audit(1769564148.419:1377): pid=8018 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=75 res=1 Jan 28 01:35:48.419000 audit[8018]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffcd0c3f0 a2=3 a3=0 items=0 ppid=1 pid=8018 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=75 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.523586 systemd[1]: Started session-75.scope - Session 75 of User core. 
Jan 28 01:35:48.569907 kernel: audit: type=1300 audit(1769564148.419:1377): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffcd0c3f0 a2=3 a3=0 items=0 ppid=1 pid=8018 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=75 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:48.570159 kernel: audit: type=1327 audit(1769564148.419:1377): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:48.419000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:48.560000 audit[8018]: USER_START pid=8018 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:48.627518 kernel: audit: type=1105 audit(1769564148.560:1378): pid=8018 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:48.582000 audit[8023]: CRED_ACQ pid=8023 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:48.681459 kernel: audit: type=1103 audit(1769564148.582:1379): pid=8023 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:49.277386 sshd[8023]: Connection closed by 10.0.0.1 port 
35950 Jan 28 01:35:49.277459 sshd-session[8018]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:49.283000 audit[8018]: USER_END pid=8018 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:49.302742 systemd[1]: sshd@73-10.0.0.61:22-10.0.0.1:35950.service: Deactivated successfully. Jan 28 01:35:49.311333 systemd[1]: session-75.scope: Deactivated successfully. Jan 28 01:35:49.317573 systemd-logind[1590]: Session 75 logged out. Waiting for processes to exit. Jan 28 01:35:49.320108 kernel: audit: type=1106 audit(1769564149.283:1380): pid=8018 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:49.283000 audit[8018]: CRED_DISP pid=8018 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:49.322810 systemd-logind[1590]: Removed session 75. Jan 28 01:35:49.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.61:22-10.0.0.1:35950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:49.336181 kernel: audit: type=1104 audit(1769564149.283:1381): pid=8018 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:52.072717 kubelet[2995]: E0128 01:35:52.071775 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:35:54.062701 kubelet[2995]: E0128 01:35:54.062384 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:35:54.364895 systemd[1]: Started sshd@74-10.0.0.61:22-10.0.0.1:59382.service - OpenSSH per-connection server daemon (10.0.0.1:59382). Jan 28 01:35:54.376795 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:35:54.376922 kernel: audit: type=1130 audit(1769564154.363:1383): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-10.0.0.61:22-10.0.0.1:59382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 28 01:35:54.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-10.0.0.61:22-10.0.0.1:59382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:35:54.590539 sshd[8065]: Accepted publickey for core from 10.0.0.1 port 59382 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:35:54.587000 audit[8065]: USER_ACCT pid=8065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:54.617859 sshd-session[8065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:54.635949 kernel: audit: type=1101 audit(1769564154.587:1384): pid=8065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:54.600000 audit[8065]: CRED_ACQ pid=8065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:54.663707 systemd-logind[1590]: New session 76 of user core. 
Jan 28 01:35:54.673096 kernel: audit: type=1103 audit(1769564154.600:1385): pid=8065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:54.600000 audit[8065]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdacafb090 a2=3 a3=0 items=0 ppid=1 pid=8065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=76 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:54.782780 kernel: audit: type=1006 audit(1769564154.600:1386): pid=8065 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=76 res=1 Jan 28 01:35:54.783110 kernel: audit: type=1300 audit(1769564154.600:1386): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdacafb090 a2=3 a3=0 items=0 ppid=1 pid=8065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=76 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:35:54.783177 kernel: audit: type=1327 audit(1769564154.600:1386): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:54.600000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:35:54.827630 systemd[1]: Started session-76.scope - Session 76 of User core. 
Jan 28 01:35:54.860000 audit[8065]: USER_START pid=8065 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:55.011670 kernel: audit: type=1105 audit(1769564154.860:1387): pid=8065 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:55.011834 kernel: audit: type=1103 audit(1769564154.885:1388): pid=8070 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:54.885000 audit[8070]: CRED_ACQ pid=8070 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:55.379423 sshd[8070]: Connection closed by 10.0.0.1 port 59382 Jan 28 01:35:55.382399 sshd-session[8065]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:55.386000 audit[8065]: USER_END pid=8065 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:55.394458 systemd[1]: sshd@74-10.0.0.61:22-10.0.0.1:59382.service: Deactivated successfully. 
Jan 28 01:35:55.418750 systemd[1]: session-76.scope: Deactivated successfully. Jan 28 01:35:55.442912 systemd-logind[1590]: Session 76 logged out. Waiting for processes to exit. Jan 28 01:35:55.466427 kernel: audit: type=1106 audit(1769564155.386:1389): pid=8065 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:55.466751 kernel: audit: type=1104 audit(1769564155.386:1390): pid=8065 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:55.386000 audit[8065]: CRED_DISP pid=8065 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:35:55.466166 systemd-logind[1590]: Removed session 76. Jan 28 01:35:55.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-10.0.0.61:22-10.0.0.1:59382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:35:57.082100 kubelet[2995]: E0128 01:35:57.081552 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:57.107884 kubelet[2995]: E0128 01:35:57.107752 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:35:58.096894 kubelet[2995]: E0128 01:35:58.096815 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:36:00.063138 kubelet[2995]: E0128 01:36:00.062928 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:36:00.431103 systemd[1]: Started sshd@75-10.0.0.61:22-10.0.0.1:59392.service - OpenSSH per-connection server daemon (10.0.0.1:59392). Jan 28 01:36:00.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@75-10.0.0.61:22-10.0.0.1:59392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:00.447185 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:36:00.449614 kernel: audit: type=1130 audit(1769564160.432:1392): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@75-10.0.0.61:22-10.0.0.1:59392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:00.808000 audit[8084]: USER_ACCT pid=8084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:00.812575 sshd-session[8084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:00.820941 sshd[8084]: Accepted publickey for core from 10.0.0.1 port 59392 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:36:00.878191 kernel: audit: type=1101 audit(1769564160.808:1393): pid=8084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:00.810000 audit[8084]: CRED_ACQ pid=8084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:00.969666 kernel: audit: type=1103 audit(1769564160.810:1394): pid=8084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:01.030176 kernel: audit: type=1006 audit(1769564160.810:1395): pid=8084 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=77 res=1 Jan 28 01:36:01.085593 kernel: audit: type=1300 audit(1769564160.810:1395): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd39de90c0 a2=3 a3=0 items=0 ppid=1 pid=8084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=77 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:00.810000 audit[8084]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd39de90c0 a2=3 a3=0 items=0 ppid=1 pid=8084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=77 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:00.810000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:01.094146 kernel: audit: type=1327 audit(1769564160.810:1395): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:01.164912 kubelet[2995]: E0128 01:36:01.164618 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:36:01.193131 systemd-logind[1590]: New session 77 of user core. Jan 28 01:36:01.241989 systemd[1]: Started session-77.scope - Session 77 of User core. 
Jan 28 01:36:01.329428 kernel: audit: type=1105 audit(1769564161.301:1396): pid=8084 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:01.301000 audit[8084]: USER_START pid=8084 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:01.332000 audit[8090]: CRED_ACQ pid=8090 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:01.353425 kernel: audit: type=1103 audit(1769564161.332:1397): pid=8090 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:01.771157 sshd[8090]: Connection closed by 10.0.0.1 port 59392 Jan 28 01:36:01.770364 sshd-session[8084]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:01.779000 audit[8084]: USER_END pid=8084 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:01.825625 systemd[1]: sshd@75-10.0.0.61:22-10.0.0.1:59392.service: Deactivated successfully. 
Jan 28 01:36:01.833892 systemd[1]: session-77.scope: Deactivated successfully. Jan 28 01:36:01.780000 audit[8084]: CRED_DISP pid=8084 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:01.841150 systemd-logind[1590]: Session 77 logged out. Waiting for processes to exit. Jan 28 01:36:01.856475 systemd-logind[1590]: Removed session 77. Jan 28 01:36:01.893913 kernel: audit: type=1106 audit(1769564161.779:1398): pid=8084 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:01.894165 kernel: audit: type=1104 audit(1769564161.780:1399): pid=8084 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:01.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@75-10.0.0.61:22-10.0.0.1:59392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:06.070891 kubelet[2995]: E0128 01:36:06.065886 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:36:06.078571 kubelet[2995]: E0128 01:36:06.078386 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:36:06.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@76-10.0.0.61:22-10.0.0.1:55442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:06.887225 systemd[1]: Started sshd@76-10.0.0.61:22-10.0.0.1:55442.service - OpenSSH per-connection server daemon (10.0.0.1:55442). Jan 28 01:36:06.965696 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:36:06.965862 kernel: audit: type=1130 audit(1769564166.887:1401): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@76-10.0.0.61:22-10.0.0.1:55442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:07.554000 audit[8103]: USER_ACCT pid=8103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:07.575189 sshd-session[8103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:07.637656 kernel: audit: type=1101 audit(1769564167.554:1402): pid=8103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:07.637713 sshd[8103]: Accepted publickey for core from 10.0.0.1 port 55442 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:36:07.566000 audit[8103]: CRED_ACQ pid=8103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:07.670435 systemd-logind[1590]: New session 78 of user core. 
Jan 28 01:36:07.756378 kernel: audit: type=1103 audit(1769564167.566:1403): pid=8103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:07.756540 kernel: audit: type=1006 audit(1769564167.566:1404): pid=8103 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=78 res=1 Jan 28 01:36:07.756605 kernel: audit: type=1300 audit(1769564167.566:1404): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd43435240 a2=3 a3=0 items=0 ppid=1 pid=8103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=78 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:07.566000 audit[8103]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd43435240 a2=3 a3=0 items=0 ppid=1 pid=8103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=78 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:07.566000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:07.819752 systemd[1]: Started session-78.scope - Session 78 of User core. 
Jan 28 01:36:07.874811 kernel: audit: type=1327 audit(1769564167.566:1404): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:07.874937 kernel: audit: type=1105 audit(1769564167.868:1405): pid=8103 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:07.868000 audit[8103]: USER_START pid=8103 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:07.958988 kernel: audit: type=1103 audit(1769564167.896:1406): pid=8107 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:07.896000 audit[8107]: CRED_ACQ pid=8107 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:08.594846 sshd[8107]: Connection closed by 10.0.0.1 port 55442 Jan 28 01:36:08.592398 sshd-session[8103]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:08.610000 audit[8103]: USER_END pid=8103 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 28 01:36:08.631000 audit[8103]: CRED_DISP pid=8103 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:08.669813 systemd[1]: sshd@76-10.0.0.61:22-10.0.0.1:55442.service: Deactivated successfully. Jan 28 01:36:08.679174 systemd[1]: session-78.scope: Deactivated successfully. Jan 28 01:36:08.691925 kernel: audit: type=1106 audit(1769564168.610:1407): pid=8103 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:08.692415 kernel: audit: type=1104 audit(1769564168.631:1408): pid=8103 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:08.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@76-10.0.0.61:22-10.0.0.1:55442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:08.695153 systemd-logind[1590]: Session 78 logged out. Waiting for processes to exit. Jan 28 01:36:08.704743 systemd-logind[1590]: Removed session 78. 
Jan 28 01:36:10.081863 kubelet[2995]: E0128 01:36:10.080562 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:36:11.067953 kubelet[2995]: E0128 01:36:11.067896 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:36:13.076922 kubelet[2995]: E0128 01:36:13.076817 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:36:13.631553 systemd[1]: Started sshd@77-10.0.0.61:22-10.0.0.1:54902.service - OpenSSH per-connection server daemon (10.0.0.1:54902). Jan 28 01:36:13.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@77-10.0.0.61:22-10.0.0.1:54902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:13.660118 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:36:13.660320 kernel: audit: type=1130 audit(1769564173.629:1410): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@77-10.0.0.61:22-10.0.0.1:54902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:13.986000 audit[8121]: USER_ACCT pid=8121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:13.994479 sshd[8121]: Accepted publickey for core from 10.0.0.1 port 54902 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:36:14.034233 kernel: audit: type=1101 audit(1769564173.986:1411): pid=8121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:14.034431 kernel: audit: type=1103 audit(1769564174.015:1412): pid=8121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:14.015000 audit[8121]: CRED_ACQ pid=8121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:14.025388 sshd-session[8121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:14.067303 kubelet[2995]: E0128 01:36:14.064381 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:36:14.077877 kernel: audit: type=1006 audit(1769564174.015:1413): pid=8121 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=79 res=1 Jan 28 01:36:14.015000 audit[8121]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe297ded20 a2=3 a3=0 items=0 ppid=1 pid=8121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=79 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:14.015000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:14.168531 kernel: audit: type=1300 audit(1769564174.015:1413): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe297ded20 a2=3 a3=0 items=0 ppid=1 pid=8121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=79 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:14.168665 kernel: audit: type=1327 audit(1769564174.015:1413): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:14.177977 systemd-logind[1590]: New session 79 of user core. Jan 28 01:36:14.203093 systemd[1]: Started session-79.scope - Session 79 of User core. 
Jan 28 01:36:14.227000 audit[8121]: USER_START pid=8121 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:14.342896 kernel: audit: type=1105 audit(1769564174.227:1414): pid=8121 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:14.288000 audit[8125]: CRED_ACQ pid=8125 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:14.404433 kernel: audit: type=1103 audit(1769564174.288:1415): pid=8125 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:15.208556 sshd[8125]: Connection closed by 10.0.0.1 port 54902 Jan 28 01:36:15.234666 sshd-session[8121]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:15.256000 audit[8121]: USER_END pid=8121 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:15.327762 systemd-logind[1590]: Session 79 logged out. Waiting for processes to exit. 
Jan 28 01:36:15.371625 kernel: audit: type=1106 audit(1769564175.256:1416): pid=8121 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:15.371854 kernel: audit: type=1104 audit(1769564175.257:1417): pid=8121 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:15.257000 audit[8121]: CRED_DISP pid=8121 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:15.364905 systemd[1]: sshd@77-10.0.0.61:22-10.0.0.1:54902.service: Deactivated successfully. Jan 28 01:36:15.384570 systemd[1]: session-79.scope: Deactivated successfully. Jan 28 01:36:15.408668 systemd-logind[1590]: Removed session 79. Jan 28 01:36:15.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@77-10.0.0.61:22-10.0.0.1:54902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:18.077515 kubelet[2995]: E0128 01:36:18.074897 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:18.172979 kubelet[2995]: E0128 01:36:18.169332 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:36:19.093695 kubelet[2995]: E0128 01:36:19.093629 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:36:22.282126 systemd[1]: Started sshd@78-10.0.0.61:22-10.0.0.1:54918.service - OpenSSH per-connection server daemon (10.0.0.1:54918). Jan 28 01:36:22.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@78-10.0.0.61:22-10.0.0.1:54918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:22.331536 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:36:22.335163 kernel: audit: type=1130 audit(1769564182.276:1419): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@78-10.0.0.61:22-10.0.0.1:54918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:22.335231 kubelet[2995]: E0128 01:36:22.305740 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.133s" Jan 28 01:36:22.335231 kubelet[2995]: E0128 01:36:22.309471 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:24.872410 kubelet[2995]: E0128 01:36:24.866376 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.806s" Jan 28 01:36:24.959925 kubelet[2995]: E0128 01:36:24.947483 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:36:24.959925 kubelet[2995]: E0128 01:36:24.948856 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:36:25.161698 kubelet[2995]: E0128 01:36:25.161526 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:36:25.434000 audit[8158]: USER_ACCT pid=8158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:25.439429 sshd[8158]: Accepted publickey for core from 10.0.0.1 port 54918 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:36:25.464404 sshd-session[8158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:25.490095 kernel: audit: type=1101 audit(1769564185.434:1420): pid=8158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 01:36:25.452000 audit[8158]: CRED_ACQ pid=8158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:25.571203 systemd-logind[1590]: New session 80 of user core. Jan 28 01:36:25.589318 kernel: audit: type=1103 audit(1769564185.452:1421): pid=8158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:25.589378 kernel: audit: type=1006 audit(1769564185.452:1422): pid=8158 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=80 res=1 Jan 28 01:36:25.621172 kernel: audit: type=1300 audit(1769564185.452:1422): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce8c44520 a2=3 a3=0 items=0 ppid=1 pid=8158 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=80 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:25.452000 audit[8158]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce8c44520 a2=3 a3=0 items=0 ppid=1 pid=8158 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=80 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:25.452000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:25.792992 kernel: audit: type=1327 audit(1769564185.452:1422): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:25.732667 systemd[1]: Started session-80.scope - Session 80 of User core. 
Jan 28 01:36:25.759000 audit[8158]: USER_START pid=8158 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:25.917134 kernel: audit: type=1105 audit(1769564185.759:1423): pid=8158 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:25.917371 kernel: audit: type=1103 audit(1769564185.767:1424): pid=8169 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:25.767000 audit[8169]: CRED_ACQ pid=8169 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:26.076911 kubelet[2995]: E0128 01:36:26.076412 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:26.863988 sshd[8169]: Connection closed by 10.0.0.1 port 54918 Jan 28 01:36:26.884770 sshd-session[8158]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:26.887000 audit[8158]: USER_END pid=8158 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:26.945931 systemd[1]: sshd@78-10.0.0.61:22-10.0.0.1:54918.service: Deactivated successfully. Jan 28 01:36:26.975418 systemd[1]: session-80.scope: Deactivated successfully. Jan 28 01:36:26.978121 systemd-logind[1590]: Session 80 logged out. Waiting for processes to exit. Jan 28 01:36:26.980816 systemd-logind[1590]: Removed session 80. Jan 28 01:36:26.887000 audit[8158]: CRED_DISP pid=8158 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:27.085761 kubelet[2995]: E0128 01:36:27.085430 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:36:27.103224 kernel: audit: type=1106 audit(1769564186.887:1425): pid=8158 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:27.103640 kernel: audit: type=1104 audit(1769564186.887:1426): pid=8158 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:26.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@78-10.0.0.61:22-10.0.0.1:54918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:29.098955 kubelet[2995]: E0128 01:36:29.097186 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:30.099967 kubelet[2995]: E0128 01:36:30.099809 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:36:31.181588 kubelet[2995]: E0128 01:36:31.180115 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:36:32.118199 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:36:32.118418 kernel: audit: type=1130 audit(1769564191.991:1428): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.61:22-10.0.0.1:58132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:31.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.61:22-10.0.0.1:58132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:31.992672 systemd[1]: Started sshd@79-10.0.0.61:22-10.0.0.1:58132.service - OpenSSH per-connection server daemon (10.0.0.1:58132). Jan 28 01:36:32.630000 audit[8184]: USER_ACCT pid=8184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:32.640515 sshd[8184]: Accepted publickey for core from 10.0.0.1 port 58132 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:36:32.679479 sshd-session[8184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:32.701125 kernel: audit: type=1101 audit(1769564192.630:1429): pid=8184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:32.663000 audit[8184]: CRED_ACQ pid=8184 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:32.777239 systemd-logind[1590]: New session 81 of user core. Jan 28 01:36:32.822431 kernel: audit: type=1103 audit(1769564192.663:1430): pid=8184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:32.822594 kernel: audit: type=1006 audit(1769564192.663:1431): pid=8184 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=81 res=1 Jan 28 01:36:32.840677 kernel: audit: type=1300 audit(1769564192.663:1431): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfb675610 a2=3 a3=0 items=0 ppid=1 pid=8184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=81 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:32.663000 audit[8184]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfb675610 a2=3 a3=0 items=0 ppid=1 pid=8184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=81 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:32.663000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:33.060687 systemd[1]: Started session-81.scope - Session 81 of User core. 
Jan 28 01:36:33.117348 kernel: audit: type=1327 audit(1769564192.663:1431): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:33.133000 audit[8184]: USER_START pid=8184 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:33.312452 kernel: audit: type=1105 audit(1769564193.133:1432): pid=8184 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:33.312606 kernel: audit: type=1103 audit(1769564193.171:1433): pid=8188 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:33.171000 audit[8188]: CRED_ACQ pid=8188 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:40.800505 kernel: sched: DL replenish lagged too much Jan 28 01:36:40.872587 kubelet[2995]: E0128 01:36:40.872545 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:41.090347 sshd[8188]: Connection closed by 10.0.0.1 port 58132 Jan 28 01:36:41.095000 audit[8184]: USER_END pid=8184 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:41.075905 sshd-session[8184]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:41.115250 systemd[1]: sshd@79-10.0.0.61:22-10.0.0.1:58132.service: Deactivated successfully. Jan 28 01:36:41.116574 systemd-logind[1590]: Session 81 logged out. Waiting for processes to exit. Jan 28 01:36:41.124807 systemd[1]: session-81.scope: Deactivated successfully. Jan 28 01:36:41.145851 systemd-logind[1590]: Removed session 81. Jan 28 01:36:41.162638 kubelet[2995]: E0128 01:36:41.162397 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:36:41.179380 kernel: audit: type=1106 audit(1769564201.095:1434): pid=8184 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:41.179627 kernel: audit: type=1104 audit(1769564201.095:1435): pid=8184 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:41.095000 audit[8184]: 
CRED_DISP pid=8184 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:41.179802 kubelet[2995]: E0128 01:36:41.165877 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:36:41.187776 kubelet[2995]: E0128 01:36:41.186835 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:36:41.195511 kubelet[2995]: E0128 01:36:41.188938 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:36:41.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.61:22-10.0.0.1:58132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:41.344814 kernel: audit: type=1131 audit(1769564201.115:1436): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.61:22-10.0.0.1:58132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:41.551604 kubelet[2995]: E0128 01:36:41.551543 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:36:45.124932 containerd[1624]: time="2026-01-28T01:36:45.123753832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:36:45.309172 containerd[1624]: time="2026-01-28T01:36:45.304740974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:36:45.317431 containerd[1624]: time="2026-01-28T01:36:45.311585091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:36:45.317431 containerd[1624]: time="2026-01-28T01:36:45.311699956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:36:45.319706 kubelet[2995]: E0128 01:36:45.312986 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:36:45.319706 kubelet[2995]: E0128 01:36:45.316454 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:36:45.326513 kubelet[2995]: E0128 01:36:45.325922 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwzkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-9msjb_calico-apiserver(63a937f8-f218-45d1-87c6-b75ad5fcad55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:36:45.330362 kubelet[2995]: E0128 01:36:45.330309 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:36:46.264505 kernel: audit: type=1130 audit(1769564206.178:1437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@80-10.0.0.61:22-10.0.0.1:52302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:46.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@80-10.0.0.61:22-10.0.0.1:52302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:46.181743 systemd[1]: Started sshd@80-10.0.0.61:22-10.0.0.1:52302.service - OpenSSH per-connection server daemon (10.0.0.1:52302). Jan 28 01:36:47.105000 audit[8201]: USER_ACCT pid=8201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:47.232105 kernel: audit: type=1101 audit(1769564207.105:1438): pid=8201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:47.234442 sshd[8201]: Accepted publickey for core from 10.0.0.1 port 52302 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:36:47.243000 audit[8201]: CRED_ACQ pid=8201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:47.256623 sshd-session[8201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:47.353212 kernel: audit: type=1103 audit(1769564207.243:1439): pid=8201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:47.431446 systemd-logind[1590]: New session 82 of user core. 
Jan 28 01:36:47.456495 kernel: audit: type=1006 audit(1769564207.243:1440): pid=8201 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=82 res=1 Jan 28 01:36:47.243000 audit[8201]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2ef35720 a2=3 a3=0 items=0 ppid=1 pid=8201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=82 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:47.243000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:47.602359 kernel: audit: type=1300 audit(1769564207.243:1440): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2ef35720 a2=3 a3=0 items=0 ppid=1 pid=8201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=82 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:47.602469 kernel: audit: type=1327 audit(1769564207.243:1440): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:47.601812 systemd[1]: Started session-82.scope - Session 82 of User core. 
Jan 28 01:36:47.610000 audit[8206]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=8206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:47.610000 audit[8206]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd0e35c480 a2=0 a3=7ffd0e35c46c items=0 ppid=3100 pid=8206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:47.736929 kernel: audit: type=1325 audit(1769564207.610:1441): table=filter:147 family=2 entries=26 op=nft_register_rule pid=8206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:47.737195 kernel: audit: type=1300 audit(1769564207.610:1441): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd0e35c480 a2=0 a3=7ffd0e35c46c items=0 ppid=3100 pid=8206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:47.739570 kernel: audit: type=1327 audit(1769564207.610:1441): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:47.610000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:47.791470 kernel: audit: type=1105 audit(1769564207.681:1442): pid=8201 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:47.681000 audit[8201]: USER_START pid=8201 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:47.706000 audit[8207]: CRED_ACQ pid=8207 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:47.935000 audit[8206]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=8206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:36:47.935000 audit[8206]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd0e35c480 a2=0 a3=7ffd0e35c46c items=0 ppid=3100 pid=8206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:47.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:36:48.730385 sshd[8207]: Connection closed by 10.0.0.1 port 52302 Jan 28 01:36:48.731722 sshd-session[8201]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:48.779000 audit[8201]: USER_END pid=8201 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:48.780000 audit[8201]: CRED_DISP pid=8201 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 01:36:48.817127 systemd[1]: sshd@80-10.0.0.61:22-10.0.0.1:52302.service: Deactivated successfully. Jan 28 01:36:48.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@80-10.0.0.61:22-10.0.0.1:52302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:48.859202 systemd[1]: session-82.scope: Deactivated successfully. Jan 28 01:36:48.888994 systemd-logind[1590]: Session 82 logged out. Waiting for processes to exit. Jan 28 01:36:48.937951 systemd-logind[1590]: Removed session 82. Jan 28 01:36:52.097355 kubelet[2995]: E0128 01:36:52.090856 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:36:53.831599 systemd[1]: Started sshd@81-10.0.0.61:22-10.0.0.1:55014.service - OpenSSH per-connection server daemon (10.0.0.1:55014). Jan 28 01:36:53.928355 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 01:36:53.930412 kernel: audit: type=1130 audit(1769564213.830:1448): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@81-10.0.0.61:22-10.0.0.1:55014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:36:53.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@81-10.0.0.61:22-10.0.0.1:55014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:36:54.087144 kubelet[2995]: E0128 01:36:54.085880 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:36:54.171444 containerd[1624]: time="2026-01-28T01:36:54.145710330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:36:54.395413 containerd[1624]: time="2026-01-28T01:36:54.395348828Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:36:54.420887 containerd[1624]: time="2026-01-28T01:36:54.420745342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:36:54.427144 containerd[1624]: time="2026-01-28T01:36:54.421196412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:36:54.430117 kubelet[2995]: E0128 01:36:54.429757 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:36:54.430117 kubelet[2995]: E0128 01:36:54.429833 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:36:54.465923 kubelet[2995]: E0128 01:36:54.438467 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4880c63e305842ec869c2f1042d40e46,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:36:54.530535 containerd[1624]: time="2026-01-28T01:36:54.520826527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:36:54.706000 audit[8249]: USER_ACCT pid=8249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:54.751350 sshd-session[8249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:54.768482 sshd[8249]: Accepted publickey for core from 10.0.0.1 port 55014 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:36:54.808884 kernel: audit: type=1101 audit(1769564214.706:1449): pid=8249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:54.809153 containerd[1624]: time="2026-01-28T01:36:54.807189496Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:36:54.724000 audit[8249]: CRED_ACQ pid=8249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:54.886404 kubelet[2995]: E0128 01:36:54.826372 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:36:54.886404 kubelet[2995]: E0128 
01:36:54.826436 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:36:54.886404 kubelet[2995]: E0128 01:36:54.826731 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:R
untimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b57dd5bd-wgdx5_calico-system(fe814487-9d96-47c2-af16-f4ab9eb63844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:36:54.886404 kubelet[2995]: E0128 01:36:54.828638 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:36:54.887174 containerd[1624]: time="2026-01-28T01:36:54.825586324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:36:54.887174 containerd[1624]: time="2026-01-28T01:36:54.825710806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:36:54.909185 systemd-logind[1590]: New session 83 of user core. 
Jan 28 01:36:54.985227 kernel: audit: type=1103 audit(1769564214.724:1450): pid=8249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:54.985448 kernel: audit: type=1006 audit(1769564214.724:1451): pid=8249 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=83 res=1 Jan 28 01:36:54.985500 kernel: audit: type=1300 audit(1769564214.724:1451): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4de252b0 a2=3 a3=0 items=0 ppid=1 pid=8249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=83 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:54.724000 audit[8249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4de252b0 a2=3 a3=0 items=0 ppid=1 pid=8249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=83 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:36:55.016748 kernel: audit: type=1327 audit(1769564214.724:1451): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:54.724000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:36:55.044495 systemd[1]: Started session-83.scope - Session 83 of User core. 
Jan 28 01:36:55.082799 containerd[1624]: time="2026-01-28T01:36:55.081208014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:36:55.485805 kernel: audit: type=1105 audit(1769564215.397:1452): pid=8249 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:55.397000 audit[8249]: USER_START pid=8249 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:55.486827 containerd[1624]: time="2026-01-28T01:36:55.486521023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:36:55.426000 audit[8253]: CRED_ACQ pid=8253 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:55.599611 kernel: audit: type=1103 audit(1769564215.426:1453): pid=8253 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:55.662430 containerd[1624]: time="2026-01-28T01:36:55.662161731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:36:55.662430 
containerd[1624]: time="2026-01-28T01:36:55.662374086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:36:55.670183 kubelet[2995]: E0128 01:36:55.669885 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:36:55.736352 kubelet[2995]: E0128 01:36:55.684608 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:36:55.736352 kubelet[2995]: E0128 01:36:55.708720 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:36:55.892221 containerd[1624]: time="2026-01-28T01:36:55.788972840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:36:56.038870 containerd[1624]: time="2026-01-28T01:36:56.037663023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:36:56.063467 containerd[1624]: time="2026-01-28T01:36:56.060984352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:36:56.063467 containerd[1624]: time="2026-01-28T01:36:56.061416807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:36:56.072510 kubelet[2995]: E0128 01:36:56.067681 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:36:56.072510 kubelet[2995]: E0128 01:36:56.067758 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:36:56.072510 kubelet[2995]: E0128 01:36:56.067937 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kgc2v_calico-system(845c6024-31b8-4f74-be49-c76c18f222f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:36:56.076421 kubelet[2995]: E0128 01:36:56.074645 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:36:56.144812 kubelet[2995]: E0128 01:36:56.143475 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:36:56.312528 containerd[1624]: time="2026-01-28T01:36:56.302710660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:36:56.528457 containerd[1624]: time="2026-01-28T01:36:56.521696221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:36:56.649120 containerd[1624]: time="2026-01-28T01:36:56.606534913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:36:56.649120 containerd[1624]: time="2026-01-28T01:36:56.606752388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:36:56.655621 kubelet[2995]: E0128 01:36:56.607585 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:36:56.655621 kubelet[2995]: E0128 01:36:56.607655 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:36:56.655621 kubelet[2995]: E0128 01:36:56.608100 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptffd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g77nq_calico-system(8c19397c-299f-4305-bb7b-810de8e940fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:36:56.655621 kubelet[2995]: E0128 01:36:56.613121 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:36:56.980525 sshd[8253]: Connection closed by 10.0.0.1 port 55014 Jan 28 01:36:56.990632 sshd-session[8249]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:57.023000 audit[8249]: USER_END pid=8249 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:57.085949 kubelet[2995]: E0128 01:36:57.062481 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:57.086690 kernel: audit: type=1106 audit(1769564217.023:1454): pid=8249 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:57.048233 systemd[1]: sshd@81-10.0.0.61:22-10.0.0.1:55014.service: Deactivated successfully. Jan 28 01:36:57.023000 audit[8249]: CRED_DISP pid=8249 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:57.070888 systemd[1]: session-83.scope: Deactivated successfully. Jan 28 01:36:57.088462 systemd-logind[1590]: Session 83 logged out. Waiting for processes to exit. Jan 28 01:36:57.139217 systemd-logind[1590]: Removed session 83. 
Jan 28 01:36:57.164527 kernel: audit: type=1104 audit(1769564217.023:1455): pid=8249 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:36:57.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@81-10.0.0.61:22-10.0.0.1:55014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:02.105968 systemd[1]: Started sshd@82-10.0.0.61:22-10.0.0.1:55016.service - OpenSSH per-connection server daemon (10.0.0.1:55016). Jan 28 01:37:02.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@82-10.0.0.61:22-10.0.0.1:55016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:02.138485 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:37:02.138664 kernel: audit: type=1130 audit(1769564222.110:1457): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@82-10.0.0.61:22-10.0.0.1:55016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:37:02.194127 kubelet[2995]: E0128 01:37:02.191898 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:02.792000 audit[8276]: USER_ACCT pid=8276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:02.836912 sshd-session[8276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:02.840096 sshd[8276]: Accepted publickey for core from 10.0.0.1 port 55016 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:37:02.962800 kernel: audit: type=1101 audit(1769564222.792:1458): pid=8276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:02.962885 kernel: audit: type=1103 audit(1769564222.825:1459): pid=8276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:02.825000 audit[8276]: CRED_ACQ pid=8276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:03.028747 systemd-logind[1590]: New session 84 of user core. 
Jan 28 01:37:03.203618 kernel: audit: type=1006 audit(1769564222.825:1460): pid=8276 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=84 res=1 Jan 28 01:37:03.203791 kernel: audit: type=1300 audit(1769564222.825:1460): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc18c9c5c0 a2=3 a3=0 items=0 ppid=1 pid=8276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=84 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:02.825000 audit[8276]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc18c9c5c0 a2=3 a3=0 items=0 ppid=1 pid=8276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=84 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:03.329638 kernel: audit: type=1327 audit(1769564222.825:1460): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:02.825000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:03.407388 systemd[1]: Started session-84.scope - Session 84 of User core. 
Jan 28 01:37:03.493000 audit[8276]: USER_START pid=8276 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:03.619140 kernel: audit: type=1105 audit(1769564223.493:1461): pid=8276 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:03.704866 kernel: audit: type=1103 audit(1769564223.517:1462): pid=8280 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:03.517000 audit[8280]: CRED_ACQ pid=8280 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:04.685867 sshd[8280]: Connection closed by 10.0.0.1 port 55016 Jan 28 01:37:04.684470 sshd-session[8276]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:04.695459 systemd[1]: sshd@82-10.0.0.61:22-10.0.0.1:55016.service: Deactivated successfully. 
Jan 28 01:37:04.686000 audit[8276]: USER_END pid=8276 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:04.708800 systemd[1]: session-84.scope: Deactivated successfully. Jan 28 01:37:04.722799 systemd-logind[1590]: Session 84 logged out. Waiting for processes to exit. Jan 28 01:37:04.735362 systemd-logind[1590]: Removed session 84. Jan 28 01:37:04.803642 kernel: audit: type=1106 audit(1769564224.686:1463): pid=8276 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:04.803794 kernel: audit: type=1104 audit(1769564224.686:1464): pid=8276 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:04.686000 audit[8276]: CRED_DISP pid=8276 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:04.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@82-10.0.0.61:22-10.0.0.1:55016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:37:05.067569 containerd[1624]: time="2026-01-28T01:37:05.064987989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:37:05.284628 containerd[1624]: time="2026-01-28T01:37:05.284414250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:37:05.316694 containerd[1624]: time="2026-01-28T01:37:05.315542538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:37:05.316694 containerd[1624]: time="2026-01-28T01:37:05.315718756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:37:05.316926 kubelet[2995]: E0128 01:37:05.315988 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:37:05.316926 kubelet[2995]: E0128 01:37:05.316208 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:37:05.316926 kubelet[2995]: E0128 01:37:05.316440 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prlsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59889c77b-c5nrl_calico-apiserver(b1cfce8a-501a-4088-a990-12172f5320b3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:37:05.330973 kubelet[2995]: E0128 01:37:05.318173 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:37:06.076124 kubelet[2995]: E0128 01:37:06.070373 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:37:08.095955 kubelet[2995]: E0128 01:37:08.094727 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:37:08.095955 kubelet[2995]: E0128 01:37:08.095412 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:37:08.155755 kubelet[2995]: E0128 01:37:08.155692 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:37:12.395561 systemd[1]: Started sshd@83-10.0.0.61:22-10.0.0.1:45406.service - OpenSSH per-connection server daemon (10.0.0.1:45406). Jan 28 01:37:12.505193 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:37:12.505453 kernel: audit: type=1130 audit(1769564232.394:1466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@83-10.0.0.61:22-10.0.0.1:45406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:37:12.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@83-10.0.0.61:22-10.0.0.1:45406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:12.668577 kubelet[2995]: E0128 01:37:12.667971 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:37:13.120000 audit[8294]: USER_ACCT pid=8294 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:13.165425 kernel: audit: type=1101 audit(1769564233.120:1467): pid=8294 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:13.134632 sshd-session[8294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:13.166599 
sshd[8294]: Accepted publickey for core from 10.0.0.1 port 45406 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:37:13.129000 audit[8294]: CRED_ACQ pid=8294 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:13.206377 kernel: audit: type=1103 audit(1769564233.129:1468): pid=8294 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:13.213865 systemd-logind[1590]: New session 85 of user core. Jan 28 01:37:13.221318 kernel: audit: type=1006 audit(1769564233.129:1469): pid=8294 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=85 res=1 Jan 28 01:37:13.129000 audit[8294]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd17a29ed0 a2=3 a3=0 items=0 ppid=1 pid=8294 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=85 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:13.248421 kernel: audit: type=1300 audit(1769564233.129:1469): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd17a29ed0 a2=3 a3=0 items=0 ppid=1 pid=8294 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=85 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:13.129000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:13.272850 kernel: audit: type=1327 audit(1769564233.129:1469): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:13.272950 systemd[1]: Started session-85.scope - Session 85 of User core. 
Jan 28 01:37:13.363000 audit[8294]: USER_START pid=8294 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:13.436638 kernel: audit: type=1105 audit(1769564233.363:1470): pid=8294 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:13.382000 audit[8298]: CRED_ACQ pid=8298 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:13.545128 kernel: audit: type=1103 audit(1769564233.382:1471): pid=8298 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:14.154734 sshd[8298]: Connection closed by 10.0.0.1 port 45406 Jan 28 01:37:14.164980 sshd-session[8294]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:14.171000 audit[8294]: USER_END pid=8294 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:14.205820 kernel: audit: type=1106 audit(1769564234.171:1472): pid=8294 uid=0 auid=500 ses=85 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:14.187000 audit[8294]: CRED_DISP pid=8294 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:14.222903 systemd[1]: sshd@83-10.0.0.61:22-10.0.0.1:45406.service: Deactivated successfully. Jan 28 01:37:14.227484 systemd[1]: session-85.scope: Deactivated successfully. Jan 28 01:37:14.259973 kernel: audit: type=1104 audit(1769564234.187:1473): pid=8294 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:14.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@83-10.0.0.61:22-10.0.0.1:45406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:14.278694 systemd-logind[1590]: Session 85 logged out. Waiting for processes to exit. Jan 28 01:37:14.314313 systemd-logind[1590]: Removed session 85. 
Jan 28 01:37:19.200407 kubelet[2995]: E0128 01:37:19.200135 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:37:19.329074 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:37:19.329238 kernel: audit: type=1130 audit(1769564239.295:1475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@84-10.0.0.61:22-10.0.0.1:45796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:19.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@84-10.0.0.61:22-10.0.0.1:45796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:19.300619 systemd[1]: Started sshd@84-10.0.0.61:22-10.0.0.1:45796.service - OpenSSH per-connection server daemon (10.0.0.1:45796). 
Jan 28 01:37:19.827000 audit[8331]: USER_ACCT pid=8331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:19.845579 sshd[8331]: Accepted publickey for core from 10.0.0.1 port 45796 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:37:19.865108 kernel: audit: type=1101 audit(1769564239.827:1476): pid=8331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:19.875000 audit[8331]: CRED_ACQ pid=8331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:19.881396 sshd-session[8331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:19.922863 kernel: audit: type=1103 audit(1769564239.875:1477): pid=8331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:19.875000 audit[8331]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1d326fc0 a2=3 a3=0 items=0 ppid=1 pid=8331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=86 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:19.962551 systemd-logind[1590]: New session 86 of user core. 
Jan 28 01:37:20.010123 kernel: audit: type=1006 audit(1769564239.875:1478): pid=8331 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=86 res=1 Jan 28 01:37:20.010362 kernel: audit: type=1300 audit(1769564239.875:1478): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1d326fc0 a2=3 a3=0 items=0 ppid=1 pid=8331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=86 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:20.045175 kernel: audit: type=1327 audit(1769564239.875:1478): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:19.875000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:20.146586 systemd[1]: Started session-86.scope - Session 86 of User core. Jan 28 01:37:20.261000 audit[8331]: USER_START pid=8331 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:20.317000 audit[8353]: CRED_ACQ pid=8353 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:20.430587 kernel: audit: type=1105 audit(1769564240.261:1479): pid=8331 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:20.430753 kernel: audit: type=1103 audit(1769564240.317:1480): 
pid=8353 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:21.181971 containerd[1624]: time="2026-01-28T01:37:21.180126478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:37:21.182736 kubelet[2995]: E0128 01:37:21.180810 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:37:21.344457 containerd[1624]: time="2026-01-28T01:37:21.340662590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:37:21.352586 containerd[1624]: time="2026-01-28T01:37:21.352523440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:37:21.352850 containerd[1624]: time="2026-01-28T01:37:21.352818971Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:37:21.356609 kubelet[2995]: E0128 01:37:21.353946 2995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:37:21.356609 kubelet[2995]: E0128 01:37:21.354109 2995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:37:21.356609 kubelet[2995]: E0128 01:37:21.354363 2995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.cr
t,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfmt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d85994946-hw4kg_calico-system(2c2d5c47-2f4c-4dc2-af4d-d250680defb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:37:21.356609 kubelet[2995]: E0128 01:37:21.356121 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:37:21.358586 sshd[8353]: Connection closed by 10.0.0.1 port 45796 Jan 28 01:37:21.370682 sshd-session[8331]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:21.395000 audit[8331]: USER_END pid=8331 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:21.410324 systemd[1]: sshd@84-10.0.0.61:22-10.0.0.1:45796.service: Deactivated successfully. Jan 28 01:37:21.436656 systemd[1]: session-86.scope: Deactivated successfully. Jan 28 01:37:21.395000 audit[8331]: CRED_DISP pid=8331 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:21.459688 systemd-logind[1590]: Session 86 logged out. Waiting for processes to exit. Jan 28 01:37:21.473486 systemd-logind[1590]: Removed session 86. 
Jan 28 01:37:21.587615 kernel: audit: type=1106 audit(1769564241.395:1481): pid=8331 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:21.587784 kernel: audit: type=1104 audit(1769564241.395:1482): pid=8331 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:21.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@84-10.0.0.61:22-10.0.0.1:45796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:22.155320 kubelet[2995]: E0128 01:37:22.151705 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55" Jan 28 01:37:24.521199 kubelet[2995]: E0128 01:37:24.515891 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-g77nq" podUID="8c19397c-299f-4305-bb7b-810de8e940fe" Jan 28 01:37:24.591783 kubelet[2995]: E0128 01:37:24.536518 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b57dd5bd-wgdx5" podUID="fe814487-9d96-47c2-af16-f4ab9eb63844" Jan 28 01:37:26.263964 kubelet[2995]: E0128 01:37:26.260683 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:26.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@85-10.0.0.61:22-10.0.0.1:53634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:26.487174 systemd[1]: Started sshd@85-10.0.0.61:22-10.0.0.1:53634.service - OpenSSH per-connection server daemon (10.0.0.1:53634). 
Jan 28 01:37:26.514566 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:37:26.514753 kernel: audit: type=1130 audit(1769564246.486:1484): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@85-10.0.0.61:22-10.0.0.1:53634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:27.287000 audit[8374]: USER_ACCT pid=8374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:27.295528 sshd[8374]: Accepted publickey for core from 10.0.0.1 port 53634 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:37:27.325214 sshd-session[8374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:27.287000 audit[8374]: CRED_ACQ pid=8374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:27.438389 systemd-logind[1590]: New session 87 of user core. 
Jan 28 01:37:27.487222 kernel: audit: type=1101 audit(1769564247.287:1485): pid=8374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:27.487434 kernel: audit: type=1103 audit(1769564247.287:1486): pid=8374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:27.287000 audit[8374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdeb5a6cb0 a2=3 a3=0 items=0 ppid=1 pid=8374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=87 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:27.601964 kernel: audit: type=1006 audit(1769564247.287:1487): pid=8374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=87 res=1 Jan 28 01:37:27.951501 kernel: audit: type=1300 audit(1769564247.287:1487): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdeb5a6cb0 a2=3 a3=0 items=0 ppid=1 pid=8374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=87 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:27.951730 kernel: audit: type=1327 audit(1769564247.287:1487): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:27.287000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:27.643724 systemd[1]: Started session-87.scope - Session 87 of User core. 
Jan 28 01:37:27.994000 audit[8374]: USER_START pid=8374 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:28.077449 kernel: audit: type=1105 audit(1769564247.994:1488): pid=8374 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:28.077716 kernel: audit: type=1103 audit(1769564248.068:1489): pid=8378 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:28.068000 audit[8378]: CRED_ACQ pid=8378 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:29.128077 sshd[8378]: Connection closed by 10.0.0.1 port 53634 Jan 28 01:37:29.128770 sshd-session[8374]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:29.132000 audit[8374]: USER_END pid=8374 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:29.155311 systemd[1]: sshd@85-10.0.0.61:22-10.0.0.1:53634.service: Deactivated successfully. 
Jan 28 01:37:29.160478 kernel: audit: type=1106 audit(1769564249.132:1490): pid=8374 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:29.132000 audit[8374]: CRED_DISP pid=8374 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:29.168905 systemd[1]: session-87.scope: Deactivated successfully. Jan 28 01:37:29.174702 systemd-logind[1590]: Session 87 logged out. Waiting for processes to exit. Jan 28 01:37:29.180498 kernel: audit: type=1104 audit(1769564249.132:1491): pid=8374 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:29.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@85-10.0.0.61:22-10.0.0.1:53634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:29.198551 systemd-logind[1590]: Removed session 87. 
Jan 28 01:37:33.094797 kubelet[2995]: E0128 01:37:33.092130 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-c5nrl" podUID="b1cfce8a-501a-4088-a990-12172f5320b3" Jan 28 01:37:34.072437 kubelet[2995]: E0128 01:37:34.066900 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d85994946-hw4kg" podUID="2c2d5c47-2f4c-4dc2-af4d-d250680defb0" Jan 28 01:37:34.206630 systemd[1]: Started sshd@86-10.0.0.61:22-10.0.0.1:51916.service - OpenSSH per-connection server daemon (10.0.0.1:51916). Jan 28 01:37:34.277197 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:37:34.277434 kernel: audit: type=1130 audit(1769564254.205:1493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@86-10.0.0.61:22-10.0.0.1:51916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:34.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@86-10.0.0.61:22-10.0.0.1:51916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:37:34.770000 audit[8393]: USER_ACCT pid=8393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:34.805923 sshd-session[8393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:34.825992 sshd[8393]: Accepted publickey for core from 10.0.0.1 port 51916 ssh2: RSA SHA256:684N/8FzONEb1BNv8N6G4gdyD3D0uKJ/PaqTJ5R6c/E Jan 28 01:37:34.781000 audit[8393]: CRED_ACQ pid=8393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:34.889898 kernel: audit: type=1101 audit(1769564254.770:1494): pid=8393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:34.890087 kernel: audit: type=1103 audit(1769564254.781:1495): pid=8393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:34.953503 kernel: audit: type=1006 audit(1769564254.781:1496): pid=8393 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=88 res=1 Jan 28 01:37:34.953645 kernel: audit: type=1300 audit(1769564254.781:1496): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdea985670 a2=3 a3=0 items=0 ppid=1 pid=8393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=88 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:34.781000 audit[8393]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdea985670 a2=3 a3=0 items=0 ppid=1 pid=8393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=88 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:37:34.982140 systemd-logind[1590]: New session 88 of user core. Jan 28 01:37:34.781000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:35.036643 kernel: audit: type=1327 audit(1769564254.781:1496): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:37:35.058883 systemd[1]: Started session-88.scope - Session 88 of User core. Jan 28 01:37:35.108084 kubelet[2995]: E0128 01:37:35.107398 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kgc2v" podUID="845c6024-31b8-4f74-be49-c76c18f222f2" Jan 28 01:37:35.122000 audit[8393]: USER_START pid=8393 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:35.181740 kernel: audit: type=1105 audit(1769564255.122:1497): pid=8393 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:35.153000 audit[8397]: CRED_ACQ pid=8397 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:35.224478 kernel: audit: type=1103 audit(1769564255.153:1498): pid=8397 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:35.763105 sshd[8397]: Connection closed by 10.0.0.1 port 51916 Jan 28 01:37:35.768000 audit[8393]: USER_END pid=8393 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:35.770411 sshd-session[8393]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:35.788110 systemd-logind[1590]: Session 88 logged out. Waiting for processes to exit. Jan 28 01:37:35.790787 systemd[1]: sshd@86-10.0.0.61:22-10.0.0.1:51916.service: Deactivated successfully. 
Jan 28 01:37:35.795483 systemd[1]: session-88.scope: Deactivated successfully. Jan 28 01:37:35.799991 systemd-logind[1590]: Removed session 88. Jan 28 01:37:35.848114 kernel: audit: type=1106 audit(1769564255.768:1499): pid=8393 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:35.848816 kernel: audit: type=1104 audit(1769564255.768:1500): pid=8393 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:35.768000 audit[8393]: CRED_DISP pid=8393 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:37:35.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@86-10.0.0.61:22-10.0.0.1:51916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:37:37.078649 kubelet[2995]: E0128 01:37:37.076539 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59889c77b-9msjb" podUID="63a937f8-f218-45d1-87c6-b75ad5fcad55"