Dec 12 18:14:04.095392 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 12 18:14:04.095420 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:14:04.095432 kernel: BIOS-provided physical RAM map: Dec 12 18:14:04.095439 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 12 18:14:04.095445 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 12 18:14:04.095451 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 12 18:14:04.095459 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Dec 12 18:14:04.095465 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 12 18:14:04.095473 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Dec 12 18:14:04.095479 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Dec 12 18:14:04.095486 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e73efff] usable Dec 12 18:14:04.095492 kernel: BIOS-e820: [mem 0x000000007e73f000-0x000000007e7fffff] reserved Dec 12 18:14:04.095498 kernel: BIOS-e820: [mem 0x000000007e800000-0x000000007ea70fff] usable Dec 12 18:14:04.095505 kernel: BIOS-e820: [mem 0x000000007ea71000-0x000000007eb84fff] reserved Dec 12 18:14:04.095514 kernel: BIOS-e820: [mem 0x000000007eb85000-0x000000007f6ecfff] usable Dec 12 18:14:04.095523 kernel: BIOS-e820: [mem 0x000000007f6ed000-0x000000007f96cfff] reserved Dec 12 18:14:04.095530 kernel: BIOS-e820: [mem 0x000000007f96d000-0x000000007f97efff] ACPI data Dec 12 18:14:04.095537 kernel: BIOS-e820: [mem 0x000000007f97f000-0x000000007f9fefff] ACPI NVS Dec 12 18:14:04.095543 kernel: BIOS-e820: [mem 0x000000007f9ff000-0x000000007fe4efff] usable Dec 12 18:14:04.095550 kernel: BIOS-e820: [mem 0x000000007fe4f000-0x000000007fe52fff] reserved Dec 12 18:14:04.095556 kernel: BIOS-e820: [mem 0x000000007fe53000-0x000000007fe54fff] ACPI NVS Dec 12 18:14:04.095563 kernel: BIOS-e820: [mem 0x000000007fe55000-0x000000007febbfff] usable Dec 12 18:14:04.095569 kernel: BIOS-e820: [mem 0x000000007febc000-0x000000007ff3ffff] reserved Dec 12 18:14:04.095578 kernel: BIOS-e820: [mem 0x000000007ff40000-0x000000007fffffff] ACPI NVS Dec 12 18:14:04.095584 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 12 18:14:04.095591 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 12 18:14:04.095597 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Dec 12 18:14:04.095604 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000047fffffff] usable Dec 12 18:14:04.095610 kernel: NX (Execute Disable) protection: active Dec 12 18:14:04.095617 kernel: APIC: Static calls initialized Dec 12 18:14:04.095624 kernel: e820: update [mem 0x7dd4e018-0x7dd57a57] usable ==> usable Dec 12 18:14:04.095630 kernel: e820: update [mem 0x7dd26018-0x7dd4d457] usable ==> usable Dec 12 18:14:04.095637 kernel: extended physical RAM map: Dec 12 18:14:04.095644 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 12 
18:14:04.095652 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 12 18:14:04.095658 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 12 18:14:04.095665 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Dec 12 18:14:04.095671 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 12 18:14:04.095678 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Dec 12 18:14:04.095685 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Dec 12 18:14:04.095696 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007dd26017] usable Dec 12 18:14:04.095703 kernel: reserve setup_data: [mem 0x000000007dd26018-0x000000007dd4d457] usable Dec 12 18:14:04.095710 kernel: reserve setup_data: [mem 0x000000007dd4d458-0x000000007dd4e017] usable Dec 12 18:14:04.095717 kernel: reserve setup_data: [mem 0x000000007dd4e018-0x000000007dd57a57] usable Dec 12 18:14:04.095724 kernel: reserve setup_data: [mem 0x000000007dd57a58-0x000000007e73efff] usable Dec 12 18:14:04.095730 kernel: reserve setup_data: [mem 0x000000007e73f000-0x000000007e7fffff] reserved Dec 12 18:14:04.095737 kernel: reserve setup_data: [mem 0x000000007e800000-0x000000007ea70fff] usable Dec 12 18:14:04.095746 kernel: reserve setup_data: [mem 0x000000007ea71000-0x000000007eb84fff] reserved Dec 12 18:14:04.095753 kernel: reserve setup_data: [mem 0x000000007eb85000-0x000000007f6ecfff] usable Dec 12 18:14:04.095760 kernel: reserve setup_data: [mem 0x000000007f6ed000-0x000000007f96cfff] reserved Dec 12 18:14:04.095767 kernel: reserve setup_data: [mem 0x000000007f96d000-0x000000007f97efff] ACPI data Dec 12 18:14:04.095774 kernel: reserve setup_data: [mem 0x000000007f97f000-0x000000007f9fefff] ACPI NVS Dec 12 18:14:04.095781 kernel: reserve setup_data: [mem 0x000000007f9ff000-0x000000007fe4efff] usable Dec 12 18:14:04.095788 kernel: reserve setup_data: [mem 0x000000007fe4f000-0x000000007fe52fff] reserved Dec 12 18:14:04.095795 kernel: reserve setup_data: [mem 0x000000007fe53000-0x000000007fe54fff] ACPI NVS Dec 12 18:14:04.095802 kernel: reserve setup_data: [mem 0x000000007fe55000-0x000000007febbfff] usable Dec 12 18:14:04.095809 kernel: reserve setup_data: [mem 0x000000007febc000-0x000000007ff3ffff] reserved Dec 12 18:14:04.095815 kernel: reserve setup_data: [mem 0x000000007ff40000-0x000000007fffffff] ACPI NVS Dec 12 18:14:04.095824 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 12 18:14:04.095831 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 12 18:14:04.095838 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Dec 12 18:14:04.095845 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000047fffffff] usable Dec 12 18:14:04.095852 kernel: efi: EFI v2.7 by EDK II Dec 12 18:14:04.095859 kernel: efi: SMBIOS=0x7f772000 ACPI=0x7f97e000 ACPI 2.0=0x7f97e014 MEMATTR=0x7e282018 RNG=0x7f972018 Dec 12 18:14:04.095866 kernel: random: crng init done Dec 12 18:14:04.095873 kernel: efi: Remove mem152: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Dec 12 18:14:04.095881 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Dec 12 18:14:04.095887 kernel: secureboot: Secure boot disabled Dec 12 18:14:04.095894 kernel: SMBIOS 2.8 present. 
Dec 12 18:14:04.095903 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Dec 12 18:14:04.095910 kernel: DMI: Memory slots populated: 1/1 Dec 12 18:14:04.095917 kernel: Hypervisor detected: KVM Dec 12 18:14:04.095924 kernel: last_pfn = 0x7febc max_arch_pfn = 0x10000000000 Dec 12 18:14:04.095931 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 12 18:14:04.095938 kernel: kvm-clock: using sched offset of 5807690260 cycles Dec 12 18:14:04.095946 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 12 18:14:04.095954 kernel: tsc: Detected 2294.608 MHz processor Dec 12 18:14:04.095962 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 12 18:14:04.095971 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 12 18:14:04.095978 kernel: last_pfn = 0x480000 max_arch_pfn = 0x10000000000 Dec 12 18:14:04.095985 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 12 18:14:04.095993 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 12 18:14:04.096000 kernel: last_pfn = 0x7febc max_arch_pfn = 0x10000000000 Dec 12 18:14:04.096007 kernel: Using GB pages for direct mapping Dec 12 18:14:04.096015 kernel: ACPI: Early table checksum verification disabled Dec 12 18:14:04.096023 kernel: ACPI: RSDP 0x000000007F97E014 000024 (v02 BOCHS ) Dec 12 18:14:04.096032 kernel: ACPI: XSDT 0x000000007F97D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Dec 12 18:14:04.096040 kernel: ACPI: FACP 0x000000007F977000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:14:04.096047 kernel: ACPI: DSDT 0x000000007F978000 004441 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:14:04.096054 kernel: ACPI: FACS 0x000000007F9DD000 000040 Dec 12 18:14:04.096061 kernel: ACPI: APIC 0x000000007F976000 0000B0 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:14:04.096069 kernel: ACPI: MCFG 0x000000007F975000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:14:04.096076 kernel: ACPI: WAET 0x000000007F974000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:14:04.096086 kernel: ACPI: BGRT 0x000000007F973000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 12 18:14:04.096093 kernel: ACPI: Reserving FACP table memory at [mem 0x7f977000-0x7f9770f3] Dec 12 18:14:04.096101 kernel: ACPI: Reserving DSDT table memory at [mem 0x7f978000-0x7f97c440] Dec 12 18:14:04.096108 kernel: ACPI: Reserving FACS table memory at [mem 0x7f9dd000-0x7f9dd03f] Dec 12 18:14:04.096115 kernel: ACPI: Reserving APIC table memory at [mem 0x7f976000-0x7f9760af] Dec 12 18:14:04.096122 kernel: ACPI: Reserving MCFG table memory at [mem 0x7f975000-0x7f97503b] Dec 12 18:14:04.096130 kernel: ACPI: Reserving WAET table memory at [mem 0x7f974000-0x7f974027] Dec 12 18:14:04.096139 kernel: ACPI: Reserving BGRT table memory at [mem 0x7f973000-0x7f973037] Dec 12 18:14:04.096146 kernel: No NUMA configuration found Dec 12 18:14:04.096154 kernel: Faking a node at [mem 0x0000000000000000-0x000000047fffffff] Dec 12 18:14:04.096161 kernel: NODE_DATA(0) allocated [mem 0x47fff8dc0-0x47fffffff] Dec 12 18:14:04.096169 kernel: Zone ranges: Dec 12 18:14:04.096176 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 12 18:14:04.096218 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 12 18:14:04.096227 kernel: Normal [mem 0x0000000100000000-0x000000047fffffff] Dec 12 18:14:04.096234 kernel: Device empty Dec 12 18:14:04.096242 kernel: Movable zone start for each node Dec 
12 18:14:04.096250 kernel: Early memory node ranges Dec 12 18:14:04.096257 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 12 18:14:04.096264 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Dec 12 18:14:04.096272 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Dec 12 18:14:04.096279 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Dec 12 18:14:04.096288 kernel: node 0: [mem 0x0000000000900000-0x000000007e73efff] Dec 12 18:14:04.096296 kernel: node 0: [mem 0x000000007e800000-0x000000007ea70fff] Dec 12 18:14:04.096303 kernel: node 0: [mem 0x000000007eb85000-0x000000007f6ecfff] Dec 12 18:14:04.096317 kernel: node 0: [mem 0x000000007f9ff000-0x000000007fe4efff] Dec 12 18:14:04.096327 kernel: node 0: [mem 0x000000007fe55000-0x000000007febbfff] Dec 12 18:14:04.096335 kernel: node 0: [mem 0x0000000100000000-0x000000047fffffff] Dec 12 18:14:04.096342 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000047fffffff] Dec 12 18:14:04.096351 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 18:14:04.096359 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 12 18:14:04.096367 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Dec 12 18:14:04.096377 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 18:14:04.096384 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Dec 12 18:14:04.096393 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Dec 12 18:14:04.096401 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges Dec 12 18:14:04.096411 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 12 18:14:04.096419 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Dec 12 18:14:04.096427 kernel: On node 0, zone Normal: 324 pages in unavailable ranges Dec 12 18:14:04.096435 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 12 18:14:04.096443 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 12 18:14:04.096452 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 12 18:14:04.096460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 12 18:14:04.096470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 12 18:14:04.096478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 12 18:14:04.096486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 12 18:14:04.096495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 12 18:14:04.096503 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 12 18:14:04.096511 kernel: TSC deadline timer available Dec 12 18:14:04.096519 kernel: CPU topo: Max. logical packages: 8 Dec 12 18:14:04.096528 kernel: CPU topo: Max. logical dies: 8 Dec 12 18:14:04.096536 kernel: CPU topo: Max. dies per package: 1 Dec 12 18:14:04.096544 kernel: CPU topo: Max. threads per core: 1 Dec 12 18:14:04.096553 kernel: CPU topo: Num. cores per package: 1 Dec 12 18:14:04.096560 kernel: CPU topo: Num. 
threads per package: 1 Dec 12 18:14:04.096568 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs Dec 12 18:14:04.096576 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 12 18:14:04.096585 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 12 18:14:04.096594 kernel: kvm-guest: setup PV sched yield Dec 12 18:14:04.096603 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Dec 12 18:14:04.096611 kernel: Booting paravirtualized kernel on KVM Dec 12 18:14:04.096619 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 12 18:14:04.096627 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1 Dec 12 18:14:04.096635 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Dec 12 18:14:04.096643 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Dec 12 18:14:04.096653 kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 Dec 12 18:14:04.096661 kernel: kvm-guest: PV spinlocks enabled Dec 12 18:14:04.096669 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 12 18:14:04.096679 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:14:04.096687 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Dec 12 18:14:04.096695 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 12 18:14:04.096705 kernel: Fallback order for Node 0: 0 Dec 12 18:14:04.096713 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4192374 Dec 12 18:14:04.096721 kernel: Policy zone: Normal Dec 12 18:14:04.096730 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 18:14:04.096738 kernel: software IO TLB: area num 8. Dec 12 18:14:04.096746 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1 Dec 12 18:14:04.096755 kernel: ftrace: allocating 40103 entries in 157 pages Dec 12 18:14:04.096763 kernel: ftrace: allocated 157 pages with 5 groups Dec 12 18:14:04.096773 kernel: Dynamic Preempt: voluntary Dec 12 18:14:04.096781 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 18:14:04.096790 kernel: rcu: RCU event tracing is enabled. Dec 12 18:14:04.096799 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=8. Dec 12 18:14:04.096807 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 18:14:04.096815 kernel: Rude variant of Tasks RCU enabled. Dec 12 18:14:04.096823 kernel: Tracing variant of Tasks RCU enabled. Dec 12 18:14:04.096832 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 18:14:04.096840 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8 Dec 12 18:14:04.096849 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8. Dec 12 18:14:04.096857 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8. Dec 12 18:14:04.096865 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8. 
Dec 12 18:14:04.096873 kernel: NR_IRQS: 33024, nr_irqs: 488, preallocated irqs: 16 Dec 12 18:14:04.096881 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 18:14:04.096891 kernel: Console: colour dummy device 80x25 Dec 12 18:14:04.096899 kernel: printk: legacy console [tty0] enabled Dec 12 18:14:04.096907 kernel: printk: legacy console [ttyS0] enabled Dec 12 18:14:04.096916 kernel: ACPI: Core revision 20240827 Dec 12 18:14:04.096924 kernel: APIC: Switch to symmetric I/O mode setup Dec 12 18:14:04.096932 kernel: x2apic enabled Dec 12 18:14:04.096940 kernel: APIC: Switched APIC routing to: physical x2apic Dec 12 18:14:04.096948 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 12 18:14:04.096958 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 12 18:14:04.096966 kernel: kvm-guest: setup PV IPIs Dec 12 18:14:04.096974 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Dec 12 18:14:04.096982 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Dec 12 18:14:04.096991 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 12 18:14:04.096999 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 12 18:14:04.097006 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 12 18:14:04.097015 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 12 18:14:04.097023 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Dec 12 18:14:04.097030 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Dec 12 18:14:04.097038 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Dec 12 18:14:04.097046 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 12 18:14:04.097053 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 12 18:14:04.097061 kernel: TAA: Mitigation: Clear CPU buffers Dec 12 18:14:04.097068 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Dec 12 18:14:04.097076 kernel: active return thunk: its_return_thunk Dec 12 18:14:04.097084 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 12 18:14:04.097093 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 12 18:14:04.097101 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 12 18:14:04.097108 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 12 18:14:04.097116 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 12 18:14:04.097124 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 12 18:14:04.097131 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 12 18:14:04.097139 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 12 18:14:04.097146 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 12 18:14:04.097154 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 12 18:14:04.097162 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 12 18:14:04.097171 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 12 18:14:04.097178 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Dec 12 18:14:04.097193 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
Dec 12 18:14:04.097200 kernel: Freeing SMP alternatives memory: 32K Dec 12 18:14:04.097208 kernel: pid_max: default: 32768 minimum: 301 Dec 12 18:14:04.097216 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 18:14:04.097223 kernel: landlock: Up and running. Dec 12 18:14:04.097231 kernel: SELinux: Initializing. Dec 12 18:14:04.097238 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 18:14:04.097246 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 18:14:04.097253 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6) Dec 12 18:14:04.097263 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver. Dec 12 18:14:04.097271 kernel: ... version: 2 Dec 12 18:14:04.097280 kernel: ... bit width: 48 Dec 12 18:14:04.097288 kernel: ... generic registers: 8 Dec 12 18:14:04.097296 kernel: ... value mask: 0000ffffffffffff Dec 12 18:14:04.097304 kernel: ... max period: 00007fffffffffff Dec 12 18:14:04.097312 kernel: ... fixed-purpose events: 3 Dec 12 18:14:04.097321 kernel: ... event mask: 00000007000000ff Dec 12 18:14:04.097329 kernel: signal: max sigframe size: 3632 Dec 12 18:14:04.097337 kernel: rcu: Hierarchical SRCU implementation. Dec 12 18:14:04.097346 kernel: rcu: Max phase no-delay instances is 400. Dec 12 18:14:04.097354 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 18:14:04.097362 kernel: smp: Bringing up secondary CPUs ... Dec 12 18:14:04.097370 kernel: smpboot: x86: Booting SMP configuration: Dec 12 18:14:04.097378 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 Dec 12 18:14:04.097388 kernel: smp: Brought up 1 node, 8 CPUs Dec 12 18:14:04.097396 kernel: smpboot: Total of 8 processors activated (36713.72 BogoMIPS) Dec 12 18:14:04.097404 kernel: Memory: 16335312K/16769496K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 426624K reserved, 0K cma-reserved) Dec 12 18:14:04.097413 kernel: devtmpfs: initialized Dec 12 18:14:04.097421 kernel: x86/mm: Memory block size: 128MB Dec 12 18:14:04.097429 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Dec 12 18:14:04.097437 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Dec 12 18:14:04.097446 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Dec 12 18:14:04.097455 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7f97f000-0x7f9fefff] (524288 bytes) Dec 12 18:14:04.097463 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fe53000-0x7fe54fff] (8192 bytes) Dec 12 18:14:04.097470 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff40000-0x7fffffff] (786432 bytes) Dec 12 18:14:04.097479 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 18:14:04.097487 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear) Dec 12 18:14:04.097495 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 18:14:04.097505 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 18:14:04.097513 kernel: audit: initializing netlink subsys (disabled) Dec 12 18:14:04.097521 kernel: audit: type=2000 audit(1765563241.255:1): state=initialized audit_enabled=0 res=1 Dec 12 18:14:04.097529 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 18:14:04.097537 kernel: thermal_sys: Registered 
thermal governor 'user_space' Dec 12 18:14:04.097545 kernel: cpuidle: using governor menu Dec 12 18:14:04.097553 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 18:14:04.097562 kernel: dca service started, version 1.12.1 Dec 12 18:14:04.097570 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Dec 12 18:14:04.097578 kernel: PCI: Using configuration type 1 for base access Dec 12 18:14:04.097586 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 12 18:14:04.097594 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 18:14:04.097603 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 18:14:04.097611 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 18:14:04.097620 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 18:14:04.097628 kernel: ACPI: Added _OSI(Module Device) Dec 12 18:14:04.097636 kernel: ACPI: Added _OSI(Processor Device) Dec 12 18:14:04.097644 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 18:14:04.097651 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 18:14:04.097659 kernel: ACPI: Interpreter enabled Dec 12 18:14:04.097667 kernel: ACPI: PM: (supports S0 S3 S5) Dec 12 18:14:04.097677 kernel: ACPI: Using IOAPIC for interrupt routing Dec 12 18:14:04.097685 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 12 18:14:04.097693 kernel: PCI: Using E820 reservations for host bridge windows Dec 12 18:14:04.097701 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 12 18:14:04.097709 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 18:14:04.097885 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 18:14:04.097992 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 12 18:14:04.098089 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 12 18:14:04.098099 kernel: PCI host bridge to bus 0000:00 Dec 12 18:14:04.098219 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 12 18:14:04.098310 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 12 18:14:04.098397 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 12 18:14:04.098487 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Dec 12 18:14:04.098574 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Dec 12 18:14:04.098659 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window] Dec 12 18:14:04.098745 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 18:14:04.098859 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 12 18:14:04.098968 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Dec 12 18:14:04.099070 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref] Dec 12 18:14:04.099171 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref] Dec 12 18:14:04.099276 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff] Dec 12 18:14:04.099373 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Dec 12 18:14:04.099470 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 12 18:14:04.099583 kernel: pci 0000:00:02.0: 
[1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.099682 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff] Dec 12 18:14:04.099778 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 12 18:14:04.099884 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Dec 12 18:14:04.099983 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Dec 12 18:14:04.100082 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 12 18:14:04.100209 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.100311 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff] Dec 12 18:14:04.100409 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 12 18:14:04.100503 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Dec 12 18:14:04.100600 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Dec 12 18:14:04.100723 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.100824 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff] Dec 12 18:14:04.100919 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 12 18:14:04.101015 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Dec 12 18:14:04.101111 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Dec 12 18:14:04.101229 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.101333 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff] Dec 12 18:14:04.101432 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 12 18:14:04.101533 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Dec 12 18:14:04.101630 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Dec 12 18:14:04.101735 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.101835 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff] Dec 12 18:14:04.101936 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 12 18:14:04.102033 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Dec 12 18:14:04.102133 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Dec 12 18:14:04.102245 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.102344 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff] Dec 12 18:14:04.102450 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 12 18:14:04.102550 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Dec 12 18:14:04.102645 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Dec 12 18:14:04.102750 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.102848 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff] Dec 12 18:14:04.102946 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 12 18:14:04.103045 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Dec 12 18:14:04.103140 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Dec 12 18:14:04.103255 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.103353 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff] Dec 12 18:14:04.103450 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 12 18:14:04.103545 kernel: pci 0000:00:02.7: bridge window [mem 
0x83200000-0x833fffff] Dec 12 18:14:04.103646 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Dec 12 18:14:04.103759 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.103906 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff] Dec 12 18:14:04.104006 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Dec 12 18:14:04.104102 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Dec 12 18:14:04.104215 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Dec 12 18:14:04.104327 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.104438 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff] Dec 12 18:14:04.104536 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Dec 12 18:14:04.104632 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Dec 12 18:14:04.104728 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Dec 12 18:14:04.104836 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.104936 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Dec 12 18:14:04.105031 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Dec 12 18:14:04.105125 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Dec 12 18:14:04.105237 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Dec 12 18:14:04.105349 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.105445 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Dec 12 18:14:04.105547 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Dec 12 18:14:04.105643 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Dec 12 18:14:04.105737 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Dec 12 18:14:04.105842 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.105940 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Dec 12 18:14:04.106034 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Dec 12 18:14:04.106129 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Dec 12 18:14:04.106235 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Dec 12 18:14:04.106338 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.106437 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Dec 12 18:14:04.106533 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Dec 12 18:14:04.106627 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Dec 12 18:14:04.106723 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Dec 12 18:14:04.106825 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.106921 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff] Dec 12 18:14:04.107018 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Dec 12 18:14:04.107113 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Dec 12 18:14:04.107216 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Dec 12 18:14:04.107318 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.107414 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Dec 12 18:14:04.107509 kernel: pci 0000:00:03.7: PCI 
bridge to [bus 11] Dec 12 18:14:04.107606 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Dec 12 18:14:04.107702 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Dec 12 18:14:04.107802 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.107898 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Dec 12 18:14:04.107993 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Dec 12 18:14:04.108091 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Dec 12 18:14:04.108252 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Dec 12 18:14:04.108388 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.108513 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Dec 12 18:14:04.108639 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Dec 12 18:14:04.108767 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Dec 12 18:14:04.108891 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Dec 12 18:14:04.109027 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.109145 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Dec 12 18:14:04.109266 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Dec 12 18:14:04.109433 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Dec 12 18:14:04.109532 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Dec 12 18:14:04.109641 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.109741 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Dec 12 18:14:04.109837 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Dec 12 18:14:04.109932 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Dec 12 18:14:04.110027 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Dec 12 18:14:04.110129 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.110240 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Dec 12 18:14:04.110342 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Dec 12 18:14:04.110436 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Dec 12 18:14:04.110531 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Dec 12 18:14:04.110634 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.110731 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Dec 12 18:14:04.110826 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Dec 12 18:14:04.110925 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Dec 12 18:14:04.111021 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Dec 12 18:14:04.111125 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.111239 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Dec 12 18:14:04.111337 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Dec 12 18:14:04.111433 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Dec 12 18:14:04.111530 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Dec 12 18:14:04.111632 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.111729 kernel: pci 0000:00:04.7: 
BAR 0 [mem 0x84386000-0x84386fff] Dec 12 18:14:04.111835 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Dec 12 18:14:04.111940 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Dec 12 18:14:04.112037 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Dec 12 18:14:04.112141 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.112288 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Dec 12 18:14:04.112387 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Dec 12 18:14:04.112486 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Dec 12 18:14:04.112581 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Dec 12 18:14:04.112684 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.112781 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Dec 12 18:14:04.112877 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Dec 12 18:14:04.112974 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Dec 12 18:14:04.113073 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Dec 12 18:14:04.113175 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.113282 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Dec 12 18:14:04.113377 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Dec 12 18:14:04.113473 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Dec 12 18:14:04.113568 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Dec 12 18:14:04.113674 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.113770 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Dec 12 18:14:04.113867 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Dec 12 18:14:04.113964 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Dec 12 18:14:04.114060 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Dec 12 18:14:04.114168 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:14:04.114276 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Dec 12 18:14:04.114372 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Dec 12 18:14:04.114491 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Dec 12 18:14:04.114610 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Dec 12 18:14:04.114725 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 12 18:14:04.114858 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 12 18:14:04.114965 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 12 18:14:04.115065 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Dec 12 18:14:04.115161 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Dec 12 18:14:04.115272 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 12 18:14:04.115369 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Dec 12 18:14:04.115479 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Dec 12 18:14:04.115578 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Dec 12 18:14:04.115676 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 12 18:14:04.115778 kernel: pci 
0000:01:00.0: bridge window [io 0x6000-0x6fff] Dec 12 18:14:04.115877 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Dec 12 18:14:04.115975 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 12 18:14:04.116089 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 12 18:14:04.116216 kernel: pci_bus 0000:02: extended config space not accessible Dec 12 18:14:04.116230 kernel: acpiphp: Slot [1] registered Dec 12 18:14:04.116240 kernel: acpiphp: Slot [0] registered Dec 12 18:14:04.116249 kernel: acpiphp: Slot [2] registered Dec 12 18:14:04.116257 kernel: acpiphp: Slot [3] registered Dec 12 18:14:04.116269 kernel: acpiphp: Slot [4] registered Dec 12 18:14:04.116277 kernel: acpiphp: Slot [5] registered Dec 12 18:14:04.116286 kernel: acpiphp: Slot [6] registered Dec 12 18:14:04.116294 kernel: acpiphp: Slot [7] registered Dec 12 18:14:04.116302 kernel: acpiphp: Slot [8] registered Dec 12 18:14:04.116311 kernel: acpiphp: Slot [9] registered Dec 12 18:14:04.116320 kernel: acpiphp: Slot [10] registered Dec 12 18:14:04.116328 kernel: acpiphp: Slot [11] registered Dec 12 18:14:04.116339 kernel: acpiphp: Slot [12] registered Dec 12 18:14:04.116348 kernel: acpiphp: Slot [13] registered Dec 12 18:14:04.116356 kernel: acpiphp: Slot [14] registered Dec 12 18:14:04.116364 kernel: acpiphp: Slot [15] registered Dec 12 18:14:04.116373 kernel: acpiphp: Slot [16] registered Dec 12 18:14:04.116381 kernel: acpiphp: Slot [17] registered Dec 12 18:14:04.116390 kernel: acpiphp: Slot [18] registered Dec 12 18:14:04.116692 kernel: acpiphp: Slot [19] registered Dec 12 18:14:04.116702 kernel: acpiphp: Slot [20] registered Dec 12 18:14:04.116711 kernel: acpiphp: Slot [21] registered Dec 12 18:14:04.116720 kernel: acpiphp: Slot [22] registered Dec 12 18:14:04.116728 kernel: acpiphp: Slot [23] registered Dec 12 18:14:04.116737 kernel: acpiphp: Slot [24] registered Dec 12 18:14:04.116745 kernel: acpiphp: Slot [25] registered Dec 12 18:14:04.116754 kernel: acpiphp: Slot [26] registered Dec 12 18:14:04.116764 kernel: acpiphp: Slot [27] registered Dec 12 18:14:04.116773 kernel: acpiphp: Slot [28] registered Dec 12 18:14:04.116782 kernel: acpiphp: Slot [29] registered Dec 12 18:14:04.116790 kernel: acpiphp: Slot [30] registered Dec 12 18:14:04.116799 kernel: acpiphp: Slot [31] registered Dec 12 18:14:04.116938 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Dec 12 18:14:04.117044 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Dec 12 18:14:04.117145 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 12 18:14:04.117157 kernel: acpiphp: Slot [0-2] registered Dec 12 18:14:04.117273 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Dec 12 18:14:04.117372 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Dec 12 18:14:04.117472 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Dec 12 18:14:04.117573 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Dec 12 18:14:04.117674 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 12 18:14:04.117685 kernel: acpiphp: Slot [0-3] registered Dec 12 18:14:04.117795 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Dec 12 18:14:04.117894 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Dec 12 18:14:04.117992 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Dec 12 18:14:04.118092 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 12 18:14:04.118106 
kernel: acpiphp: Slot [0-4] registered Dec 12 18:14:04.118227 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Dec 12 18:14:04.118328 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Dec 12 18:14:04.118432 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 12 18:14:04.118445 kernel: acpiphp: Slot [0-5] registered Dec 12 18:14:04.118552 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 12 18:14:04.118653 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Dec 12 18:14:04.118751 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Dec 12 18:14:04.118850 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 12 18:14:04.118861 kernel: acpiphp: Slot [0-6] registered Dec 12 18:14:04.118963 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 12 18:14:04.118976 kernel: acpiphp: Slot [0-7] registered Dec 12 18:14:04.119078 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 12 18:14:04.119090 kernel: acpiphp: Slot [0-8] registered Dec 12 18:14:04.119196 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 12 18:14:04.119207 kernel: acpiphp: Slot [0-9] registered Dec 12 18:14:04.119309 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Dec 12 18:14:04.119320 kernel: acpiphp: Slot [0-10] registered Dec 12 18:14:04.119421 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Dec 12 18:14:04.119432 kernel: acpiphp: Slot [0-11] registered Dec 12 18:14:04.119529 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Dec 12 18:14:04.119540 kernel: acpiphp: Slot [0-12] registered Dec 12 18:14:04.119638 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Dec 12 18:14:04.119650 kernel: acpiphp: Slot [0-13] registered Dec 12 18:14:04.119752 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Dec 12 18:14:04.119763 kernel: acpiphp: Slot [0-14] registered Dec 12 18:14:04.119861 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Dec 12 18:14:04.119873 kernel: acpiphp: Slot [0-15] registered Dec 12 18:14:04.119970 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Dec 12 18:14:04.119981 kernel: acpiphp: Slot [0-16] registered Dec 12 18:14:04.120078 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Dec 12 18:14:04.120091 kernel: acpiphp: Slot [0-17] registered Dec 12 18:14:04.120206 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Dec 12 18:14:04.120221 kernel: acpiphp: Slot [0-18] registered Dec 12 18:14:04.120354 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Dec 12 18:14:04.120367 kernel: acpiphp: Slot [0-19] registered Dec 12 18:14:04.120463 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Dec 12 18:14:04.120478 kernel: acpiphp: Slot [0-20] registered Dec 12 18:14:04.120576 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Dec 12 18:14:04.120588 kernel: acpiphp: Slot [0-21] registered Dec 12 18:14:04.120685 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Dec 12 18:14:04.120696 kernel: acpiphp: Slot [0-22] registered Dec 12 18:14:04.120805 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Dec 12 18:14:04.120819 kernel: acpiphp: Slot [0-23] registered Dec 12 18:14:04.120918 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Dec 12 18:14:04.120930 kernel: acpiphp: Slot [0-24] registered Dec 12 18:14:04.121027 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Dec 12 18:14:04.121038 kernel: acpiphp: Slot [0-25] registered Dec 12 18:14:04.121136 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Dec 12 18:14:04.121147 kernel: acpiphp: Slot [0-26] registered Dec 12 18:14:04.121254 kernel: pci 0000:00:05.1: PCI bridge to [bus 
1b] Dec 12 18:14:04.121266 kernel: acpiphp: Slot [0-27] registered Dec 12 18:14:04.121362 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Dec 12 18:14:04.121373 kernel: acpiphp: Slot [0-28] registered Dec 12 18:14:04.121472 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Dec 12 18:14:04.121484 kernel: acpiphp: Slot [0-29] registered Dec 12 18:14:04.121583 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Dec 12 18:14:04.121595 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 12 18:14:04.121604 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 12 18:14:04.121613 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 12 18:14:04.121622 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 12 18:14:04.121630 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 12 18:14:04.121639 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 12 18:14:04.121650 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 12 18:14:04.121658 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 12 18:14:04.121666 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 12 18:14:04.121675 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 12 18:14:04.121684 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 12 18:14:04.121692 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 12 18:14:04.121701 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 12 18:14:04.121711 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 12 18:14:04.121719 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 12 18:14:04.121728 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 12 18:14:04.121737 kernel: iommu: Default domain type: Translated Dec 12 18:14:04.121745 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 12 18:14:04.121754 kernel: efivars: Registered efivars operations Dec 12 18:14:04.121763 kernel: PCI: Using ACPI for IRQ routing Dec 12 18:14:04.121773 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 12 18:14:04.121782 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Dec 12 18:14:04.121790 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Dec 12 18:14:04.121798 kernel: e820: reserve RAM buffer [mem 0x7dd26018-0x7fffffff] Dec 12 18:14:04.121807 kernel: e820: reserve RAM buffer [mem 0x7dd4e018-0x7fffffff] Dec 12 18:14:04.121815 kernel: e820: reserve RAM buffer [mem 0x7e73f000-0x7fffffff] Dec 12 18:14:04.121823 kernel: e820: reserve RAM buffer [mem 0x7ea71000-0x7fffffff] Dec 12 18:14:04.121833 kernel: e820: reserve RAM buffer [mem 0x7f6ed000-0x7fffffff] Dec 12 18:14:04.121842 kernel: e820: reserve RAM buffer [mem 0x7fe4f000-0x7fffffff] Dec 12 18:14:04.121850 kernel: e820: reserve RAM buffer [mem 0x7febc000-0x7fffffff] Dec 12 18:14:04.121951 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 12 18:14:04.122049 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 12 18:14:04.122144 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 12 18:14:04.122155 kernel: vgaarb: loaded Dec 12 18:14:04.122166 kernel: clocksource: Switched to clocksource kvm-clock Dec 12 18:14:04.122175 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 18:14:04.122227 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 18:14:04.122236 kernel: pnp: PnP ACPI init Dec 12 18:14:04.122354 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] 
has been reserved Dec 12 18:14:04.122368 kernel: pnp: PnP ACPI: found 5 devices Dec 12 18:14:04.122380 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 12 18:14:04.122392 kernel: NET: Registered PF_INET protocol family Dec 12 18:14:04.122401 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 18:14:04.122409 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Dec 12 18:14:04.122418 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 18:14:04.122427 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 18:14:04.122435 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 12 18:14:04.122444 kernel: TCP: Hash tables configured (established 131072 bind 65536) Dec 12 18:14:04.122455 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 12 18:14:04.122464 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 12 18:14:04.122472 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 18:14:04.122481 kernel: NET: Registered PF_XDP protocol family Dec 12 18:14:04.122586 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Dec 12 18:14:04.122684 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 12 18:14:04.122789 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 12 18:14:04.122889 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 12 18:14:04.122988 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 12 18:14:04.123090 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 12 18:14:04.123198 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 12 18:14:04.123298 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 12 18:14:04.123398 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 12 18:14:04.123499 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Dec 12 18:14:04.123601 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Dec 12 18:14:04.123701 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Dec 12 18:14:04.123800 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 12 18:14:04.123901 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 12 18:14:04.124002 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 12 18:14:04.124105 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 12 18:14:04.124220 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 12 18:14:04.124320 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Dec 12 18:14:04.124418 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Dec 12 18:14:04.124516 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Dec 12 18:14:04.124613 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 12 18:14:04.124715 kernel: pci 
0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 12 18:14:04.124813 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 12 18:14:04.124910 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 12 18:14:04.125007 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 12 18:14:04.125105 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Dec 12 18:14:04.125210 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Dec 12 18:14:04.125308 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 12 18:14:04.125409 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 12 18:14:04.125505 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Dec 12 18:14:04.125601 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Dec 12 18:14:04.125696 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Dec 12 18:14:04.125793 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Dec 12 18:14:04.125890 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Dec 12 18:14:04.125989 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Dec 12 18:14:04.126085 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Dec 12 18:14:04.126180 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Dec 12 18:14:04.126284 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Dec 12 18:14:04.126380 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Dec 12 18:14:04.126476 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Dec 12 18:14:04.126570 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Dec 12 18:14:04.126670 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Dec 12 18:14:04.126765 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.126861 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.126958 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.127056 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.127151 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.127263 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.127358 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.127453 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.127547 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.127642 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.127736 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.127834 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.127929 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.128023 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.128118 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 
18:14:04.128228 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.128323 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.128418 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.128516 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.128611 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.128706 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.128801 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.128896 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.128991 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.129089 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.129190 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.129286 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.129381 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.129476 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.129571 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.129670 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Dec 12 18:14:04.129764 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Dec 12 18:14:04.129859 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Dec 12 18:14:04.129953 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Dec 12 18:14:04.130047 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Dec 12 18:14:04.130142 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Dec 12 18:14:04.130244 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Dec 12 18:14:04.130343 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Dec 12 18:14:04.130438 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Dec 12 18:14:04.130534 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Dec 12 18:14:04.130629 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Dec 12 18:14:04.130724 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Dec 12 18:14:04.130818 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Dec 12 18:14:04.130916 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.131012 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.131107 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.131218 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.131324 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.131420 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.131516 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.131616 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.131711 kernel: pci 
0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.131807 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.131901 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.131996 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.132091 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.132205 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.132311 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.132407 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.132503 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.132598 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.132694 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.132788 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.132888 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.132984 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.133080 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.133175 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.133281 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.133376 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.133475 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.133572 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.133667 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 18:14:04.133761 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Dec 12 18:14:04.133863 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 12 18:14:04.133962 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Dec 12 18:14:04.134059 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Dec 12 18:14:04.134159 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 12 18:14:04.134264 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 12 18:14:04.134360 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Dec 12 18:14:04.134455 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Dec 12 18:14:04.134550 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 12 18:14:04.134651 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned Dec 12 18:14:04.134747 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 12 18:14:04.134845 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Dec 12 18:14:04.134939 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Dec 12 18:14:04.135034 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 12 18:14:04.135130 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Dec 12 18:14:04.135231 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Dec 12 
18:14:04.135326 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 12 18:14:04.135421 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Dec 12 18:14:04.135515 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Dec 12 18:14:04.135612 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 12 18:14:04.135708 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Dec 12 18:14:04.135803 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Dec 12 18:14:04.135899 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 12 18:14:04.135993 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Dec 12 18:14:04.136087 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Dec 12 18:14:04.136197 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 12 18:14:04.136296 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Dec 12 18:14:04.136392 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Dec 12 18:14:04.136487 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 12 18:14:04.136584 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Dec 12 18:14:04.136679 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Dec 12 18:14:04.136774 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Dec 12 18:14:04.136872 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Dec 12 18:14:04.136968 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Dec 12 18:14:04.137064 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Dec 12 18:14:04.137160 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Dec 12 18:14:04.137269 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Dec 12 18:14:04.137365 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Dec 12 18:14:04.137460 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Dec 12 18:14:04.137558 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Dec 12 18:14:04.137654 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Dec 12 18:14:04.137749 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Dec 12 18:14:04.137844 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Dec 12 18:14:04.137939 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Dec 12 18:14:04.138033 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Dec 12 18:14:04.138129 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Dec 12 18:14:04.138236 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Dec 12 18:14:04.138335 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Dec 12 18:14:04.138431 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Dec 12 18:14:04.138527 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Dec 12 18:14:04.138620 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Dec 12 18:14:04.138715 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Dec 12 18:14:04.138813 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Dec 12 18:14:04.138909 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Dec 12 18:14:04.139007 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Dec 12 
18:14:04.139103 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Dec 12 18:14:04.139205 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Dec 12 18:14:04.139301 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Dec 12 18:14:04.139396 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Dec 12 18:14:04.139496 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Dec 12 18:14:04.139591 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Dec 12 18:14:04.139686 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Dec 12 18:14:04.139780 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Dec 12 18:14:04.139875 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Dec 12 18:14:04.139971 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Dec 12 18:14:04.140066 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Dec 12 18:14:04.140163 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Dec 12 18:14:04.140283 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Dec 12 18:14:04.140379 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Dec 12 18:14:04.140475 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Dec 12 18:14:04.140570 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Dec 12 18:14:04.140667 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Dec 12 18:14:04.140767 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Dec 12 18:14:04.140863 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Dec 12 18:14:04.140959 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Dec 12 18:14:04.141056 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Dec 12 18:14:04.141154 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff] Dec 12 18:14:04.141262 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Dec 12 18:14:04.141360 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Dec 12 18:14:04.141459 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Dec 12 18:14:04.141554 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Dec 12 18:14:04.141650 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Dec 12 18:14:04.141746 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Dec 12 18:14:04.141844 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Dec 12 18:14:04.141943 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Dec 12 18:14:04.142039 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Dec 12 18:14:04.142134 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Dec 12 18:14:04.142242 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Dec 12 18:14:04.142339 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Dec 12 18:14:04.142435 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Dec 12 18:14:04.142535 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Dec 12 18:14:04.142634 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Dec 12 18:14:04.142729 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Dec 12 18:14:04.142825 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Dec 12 18:14:04.142922 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Dec 12 
18:14:04.143021 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Dec 12 18:14:04.143122 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Dec 12 18:14:04.143227 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Dec 12 18:14:04.143325 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Dec 12 18:14:04.143425 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Dec 12 18:14:04.143523 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Dec 12 18:14:04.143619 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Dec 12 18:14:04.143715 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Dec 12 18:14:04.143816 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Dec 12 18:14:04.143914 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Dec 12 18:14:04.144013 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Dec 12 18:14:04.144110 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Dec 12 18:14:04.144236 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 12 18:14:04.144332 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 12 18:14:04.144420 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 12 18:14:04.144508 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Dec 12 18:14:04.144596 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Dec 12 18:14:04.144684 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Dec 12 18:14:04.144787 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Dec 12 18:14:04.144884 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Dec 12 18:14:04.144976 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 12 18:14:04.145076 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Dec 12 18:14:04.145171 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Dec 12 18:14:04.145272 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 12 18:14:04.145370 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Dec 12 18:14:04.145464 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref] Dec 12 18:14:04.145562 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Dec 12 18:14:04.145653 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Dec 12 18:14:04.145755 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Dec 12 18:14:04.145847 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Dec 12 18:14:04.145950 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Dec 12 18:14:04.146041 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Dec 12 18:14:04.146136 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Dec 12 18:14:04.146235 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Dec 12 18:14:04.146334 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Dec 12 18:14:04.146428 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Dec 12 18:14:04.146524 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Dec 12 18:14:04.146615 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Dec 12 18:14:04.146710 kernel: pci_bus 0000:0a: 
resource 1 [mem 0x83000000-0x831fffff] Dec 12 18:14:04.146800 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Dec 12 18:14:04.146895 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Dec 12 18:14:04.146987 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Dec 12 18:14:04.147084 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Dec 12 18:14:04.147174 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Dec 12 18:14:04.147277 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Dec 12 18:14:04.147370 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Dec 12 18:14:04.147470 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Dec 12 18:14:04.147559 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Dec 12 18:14:04.147656 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Dec 12 18:14:04.147746 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Dec 12 18:14:04.147847 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Dec 12 18:14:04.147938 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Dec 12 18:14:04.148037 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Dec 12 18:14:04.148128 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Dec 12 18:14:04.148246 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Dec 12 18:14:04.148338 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Dec 12 18:14:04.148433 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Dec 12 18:14:04.148530 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Dec 12 18:14:04.148622 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Dec 12 18:14:04.148710 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Dec 12 18:14:04.148806 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Dec 12 18:14:04.148898 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Dec 12 18:14:04.148990 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Dec 12 18:14:04.149086 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Dec 12 18:14:04.149176 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Dec 12 18:14:04.149281 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Dec 12 18:14:04.149376 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Dec 12 18:14:04.149469 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] Dec 12 18:14:04.149558 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Dec 12 18:14:04.149656 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Dec 12 18:14:04.149746 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Dec 12 18:14:04.149835 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Dec 12 18:14:04.149932 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Dec 12 18:14:04.150024 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Dec 12 18:14:04.150113 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Dec 12 18:14:04.150216 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Dec 12 18:14:04.150307 kernel: pci_bus 0000:19: resource 1 [mem 
0x81200000-0x813fffff] Dec 12 18:14:04.150395 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Dec 12 18:14:04.150493 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Dec 12 18:14:04.150583 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Dec 12 18:14:04.150671 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Dec 12 18:14:04.150766 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Dec 12 18:14:04.150856 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Dec 12 18:14:04.150944 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Dec 12 18:14:04.151041 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Dec 12 18:14:04.151130 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Dec 12 18:14:04.151226 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Dec 12 18:14:04.151322 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Dec 12 18:14:04.151411 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Dec 12 18:14:04.151503 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref] Dec 12 18:14:04.151598 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Dec 12 18:14:04.151688 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Dec 12 18:14:04.151776 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Dec 12 18:14:04.151788 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 12 18:14:04.151797 kernel: PCI: CLS 0 bytes, default 64 Dec 12 18:14:04.151805 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 12 18:14:04.151816 kernel: software IO TLB: mapped [mem 0x0000000077e7e000-0x000000007be7e000] (64MB) Dec 12 18:14:04.151825 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 12 18:14:04.151834 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Dec 12 18:14:04.151842 kernel: Initialise system trusted keyrings Dec 12 18:14:04.151851 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Dec 12 18:14:04.151860 kernel: Key type asymmetric registered Dec 12 18:14:04.151870 kernel: Asymmetric key parser 'x509' registered Dec 12 18:14:04.151878 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 12 18:14:04.151887 kernel: io scheduler mq-deadline registered Dec 12 18:14:04.151896 kernel: io scheduler kyber registered Dec 12 18:14:04.151904 kernel: io scheduler bfq registered Dec 12 18:14:04.152005 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 12 18:14:04.152103 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 12 18:14:04.152222 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 12 18:14:04.152320 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 12 18:14:04.152421 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 12 18:14:04.152517 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 12 18:14:04.152616 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 12 18:14:04.152715 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 12 18:14:04.152812 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 12 18:14:04.152910 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 12 18:14:04.153007 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 12 18:14:04.153102 kernel: pcieport 
0000:00:02.5: AER: enabled with IRQ 29 Dec 12 18:14:04.153209 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 12 18:14:04.153306 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 12 18:14:04.153404 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 12 18:14:04.153501 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 12 18:14:04.153512 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 12 18:14:04.153610 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 12 18:14:04.153706 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 12 18:14:04.153803 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Dec 12 18:14:04.153899 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Dec 12 18:14:04.153997 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Dec 12 18:14:04.154098 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Dec 12 18:14:04.154204 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Dec 12 18:14:04.154301 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Dec 12 18:14:04.154398 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Dec 12 18:14:04.154493 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Dec 12 18:14:04.154593 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Dec 12 18:14:04.154688 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Dec 12 18:14:04.154785 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Dec 12 18:14:04.154881 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Dec 12 18:14:04.154977 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Dec 12 18:14:04.155075 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39 Dec 12 18:14:04.155086 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 12 18:14:04.155180 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Dec 12 18:14:04.155282 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Dec 12 18:14:04.155379 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Dec 12 18:14:04.155474 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Dec 12 18:14:04.155571 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Dec 12 18:14:04.155668 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Dec 12 18:14:04.155764 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Dec 12 18:14:04.155860 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Dec 12 18:14:04.155957 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Dec 12 18:14:04.156052 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Dec 12 18:14:04.156148 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Dec 12 18:14:04.156263 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Dec 12 18:14:04.156365 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Dec 12 18:14:04.156465 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Dec 12 18:14:04.156562 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Dec 12 18:14:04.156657 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Dec 12 18:14:04.156668 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 12 18:14:04.156762 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Dec 12 18:14:04.156859 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Dec 12 18:14:04.156955 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Dec 12 18:14:04.157051 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Dec 12 18:14:04.157148 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Dec 12 18:14:04.157257 kernel: 
pcieport 0000:00:05.2: AER: enabled with IRQ 50 Dec 12 18:14:04.157354 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51 Dec 12 18:14:04.157451 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Dec 12 18:14:04.157549 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Dec 12 18:14:04.157645 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Dec 12 18:14:04.157656 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 12 18:14:04.157665 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 18:14:04.157674 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 18:14:04.157682 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 12 18:14:04.157691 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 12 18:14:04.157702 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 12 18:14:04.157812 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 12 18:14:04.157905 kernel: rtc_cmos 00:03: registered as rtc0 Dec 12 18:14:04.157997 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:14:02 UTC (1765563242) Dec 12 18:14:04.158090 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 12 18:14:04.158101 kernel: intel_pstate: CPU model not supported Dec 12 18:14:04.158112 kernel: efifb: probing for efifb Dec 12 18:14:04.158121 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Dec 12 18:14:04.158130 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 12 18:14:04.158138 kernel: efifb: scrolling: redraw Dec 12 18:14:04.158147 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 12 18:14:04.158155 kernel: Console: switching to colour frame buffer device 160x50 Dec 12 18:14:04.158164 kernel: fb0: EFI VGA frame buffer device Dec 12 18:14:04.158175 kernel: pstore: Using crash dump compression: deflate Dec 12 18:14:04.158190 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Dec 12 18:14:04.158200 kernel: pstore: Registered efi_pstore as persistent store backend Dec 12 18:14:04.158208 kernel: NET: Registered PF_INET6 protocol family Dec 12 18:14:04.158217 kernel: Segment Routing with IPv6 Dec 12 18:14:04.158225 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 18:14:04.158234 kernel: NET: Registered PF_PACKET protocol family Dec 12 18:14:04.158242 kernel: Key type dns_resolver registered Dec 12 18:14:04.158252 kernel: IPI shorthand broadcast: enabled Dec 12 18:14:04.158261 kernel: sched_clock: Marking stable (2823007792, 162976430)->(3223926316, -237942094) Dec 12 18:14:04.158269 kernel: registered taskstats version 1 Dec 12 18:14:04.158278 kernel: Loading compiled-in X.509 certificates Dec 12 18:14:04.158287 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 12 18:14:04.158295 kernel: Demotion targets for Node 0: null Dec 12 18:14:04.158304 kernel: Key type .fscrypt registered Dec 12 18:14:04.158314 kernel: Key type fscrypt-provisioning registered Dec 12 18:14:04.158322 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 12 18:14:04.158331 kernel: ima: Allocated hash algorithm: sha1 Dec 12 18:14:04.158339 kernel: ima: No architecture policies found Dec 12 18:14:04.158348 kernel: clk: Disabling unused clocks Dec 12 18:14:04.158356 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 12 18:14:04.158365 kernel: Write protecting the kernel read-only data: 45056k Dec 12 18:14:04.158375 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 12 18:14:04.158384 kernel: Run /init as init process Dec 12 18:14:04.158392 kernel: with arguments: Dec 12 18:14:04.158401 kernel: /init Dec 12 18:14:04.158410 kernel: with environment: Dec 12 18:14:04.158418 kernel: HOME=/ Dec 12 18:14:04.158427 kernel: TERM=linux Dec 12 18:14:04.158435 kernel: SCSI subsystem initialized Dec 12 18:14:04.158445 kernel: libata version 3.00 loaded. Dec 12 18:14:04.158553 kernel: ahci 0000:00:1f.2: version 3.0 Dec 12 18:14:04.158564 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 12 18:14:04.158662 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 12 18:14:04.158758 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 12 18:14:04.158854 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 12 18:14:04.158968 kernel: scsi host0: ahci Dec 12 18:14:04.159073 kernel: scsi host1: ahci Dec 12 18:14:04.159198 kernel: scsi host2: ahci Dec 12 18:14:04.159301 kernel: scsi host3: ahci Dec 12 18:14:04.159403 kernel: scsi host4: ahci Dec 12 18:14:04.159513 kernel: scsi host5: ahci Dec 12 18:14:04.159525 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 55 lpm-pol 1 Dec 12 18:14:04.159534 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 55 lpm-pol 1 Dec 12 18:14:04.159543 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 55 lpm-pol 1 Dec 12 18:14:04.159552 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 55 lpm-pol 1 Dec 12 18:14:04.159561 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 55 lpm-pol 1 Dec 12 18:14:04.159570 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 55 lpm-pol 1 Dec 12 18:14:04.159581 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 12 18:14:04.159590 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 12 18:14:04.159599 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 12 18:14:04.159607 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 12 18:14:04.159616 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 12 18:14:04.159625 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 12 18:14:04.159634 kernel: ACPI: bus type USB registered Dec 12 18:14:04.159645 kernel: usbcore: registered new interface driver usbfs Dec 12 18:14:04.159653 kernel: usbcore: registered new interface driver hub Dec 12 18:14:04.159662 kernel: usbcore: registered new device driver usb Dec 12 18:14:04.159774 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Dec 12 18:14:04.159878 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Dec 12 18:14:04.159982 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Dec 12 18:14:04.160084 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Dec 12 18:14:04.160246 kernel: hub 1-0:1.0: USB hub found Dec 12 18:14:04.160359 kernel: hub 1-0:1.0: 2 ports detected Dec 12 18:14:04.160471 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues Dec 12 18:14:04.160572 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks 
(53.7 GB/50.0 GiB) Dec 12 18:14:04.160583 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 18:14:04.160596 kernel: GPT:25804799 != 104857599 Dec 12 18:14:04.160605 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 18:14:04.160613 kernel: GPT:25804799 != 104857599 Dec 12 18:14:04.160622 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 18:14:04.160630 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:14:04.160640 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 18:14:04.160648 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:14:04.160659 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:14:04.160668 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 12 18:14:04.160677 kernel: raid6: avx512x4 gen() 42544 MB/s Dec 12 18:14:04.160686 kernel: raid6: avx512x2 gen() 46951 MB/s Dec 12 18:14:04.160694 kernel: raid6: avx512x1 gen() 44390 MB/s Dec 12 18:14:04.160703 kernel: raid6: avx2x4 gen() 34907 MB/s Dec 12 18:14:04.160711 kernel: raid6: avx2x2 gen() 34371 MB/s Dec 12 18:14:04.160722 kernel: raid6: avx2x1 gen() 30432 MB/s Dec 12 18:14:04.160730 kernel: raid6: using algorithm avx512x2 gen() 46951 MB/s Dec 12 18:14:04.160739 kernel: raid6: .... xor() 26795 MB/s, rmw enabled Dec 12 18:14:04.160750 kernel: raid6: using avx512x2 recovery algorithm Dec 12 18:14:04.160758 kernel: xor: automatically using best checksumming function avx Dec 12 18:14:04.160883 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Dec 12 18:14:04.160899 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:14:04.160908 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (261) Dec 12 18:14:04.160917 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 12 18:14:04.160926 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:14:04.160935 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:14:04.160943 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:14:04.160955 kernel: loop: module loaded Dec 12 18:14:04.160964 kernel: loop0: detected capacity change from 0 to 100136 Dec 12 18:14:04.160973 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:14:04.160981 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 18:14:04.160990 kernel: usbcore: registered new interface driver usbhid Dec 12 18:14:04.160999 kernel: usbhid: USB HID core driver Dec 12 18:14:04.161010 systemd[1]: Successfully made /usr/ read-only. Dec 12 18:14:04.161025 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:14:04.161035 systemd[1]: Detected virtualization kvm. Dec 12 18:14:04.161044 systemd[1]: Detected architecture x86-64. Dec 12 18:14:04.161053 systemd[1]: Running in initrd. Dec 12 18:14:04.161062 systemd[1]: No hostname configured, using default hostname. Dec 12 18:14:04.161072 systemd[1]: Hostname set to . 
Dec 12 18:14:04.161083 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 18:14:04.161092 systemd[1]: Queued start job for default target initrd.target. Dec 12 18:14:04.161101 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:14:04.161110 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:14:04.161119 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:14:04.161129 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 18:14:04.161138 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:14:04.161150 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 18:14:04.161159 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 18:14:04.161168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:14:04.161178 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:14:04.161193 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:14:04.161205 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:14:04.161214 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:14:04.161223 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:14:04.161232 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:14:04.161242 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:14:04.161251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:14:04.161260 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:14:04.161271 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 18:14:04.161280 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 18:14:04.161289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:14:04.161298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:14:04.161308 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:14:04.161317 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:14:04.161326 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:14:04.161337 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 18:14:04.161346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:14:04.161355 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 18:14:04.161365 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 18:14:04.161374 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 18:14:04.161383 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:14:04.161394 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:14:04.161404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 12 18:14:04.161413 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 18:14:04.161422 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:14:04.161434 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 18:14:04.161443 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:14:04.161477 systemd-journald[409]: Collecting audit messages is enabled. Dec 12 18:14:04.161504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:14:04.161514 kernel: audit: type=1130 audit(1765563244.097:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.161524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:14:04.161533 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 18:14:04.161542 kernel: Bridge firewalling registered Dec 12 18:14:04.161553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:14:04.161562 kernel: audit: type=1130 audit(1765563244.120:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.161573 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:14:04.161583 kernel: audit: type=1130 audit(1765563244.126:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.161592 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:14:04.161601 kernel: audit: type=1130 audit(1765563244.132:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.161610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 18:14:04.161619 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:14:04.161632 systemd-journald[409]: Journal started Dec 12 18:14:04.161653 systemd-journald[409]: Runtime Journal (/run/log/journal/84006ba051b04d1e9e24bfce20c48697) is 8M, max 319.5M, 311.5M free. Dec 12 18:14:04.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:04.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.116087 systemd-modules-load[412]: Inserted module 'br_netfilter' Dec 12 18:14:04.164240 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:14:04.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.167826 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:14:04.169387 kernel: audit: type=1130 audit(1765563244.164:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.171160 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:14:04.177218 kernel: audit: type=1130 audit(1765563244.171:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.177248 kernel: audit: type=1334 audit(1765563244.171:8): prog-id=6 op=LOAD Dec 12 18:14:04.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.171000 audit: BPF prog-id=6 op=LOAD Dec 12 18:14:04.172931 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:14:04.175848 systemd-tmpfiles[439]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 18:14:04.182909 kernel: audit: type=1130 audit(1765563244.178:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.177962 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:14:04.179757 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:14:04.188771 kernel: audit: type=1130 audit(1765563244.184:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.183805 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 12 18:14:04.196627 dracut-cmdline[452]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:14:04.225333 systemd-resolved[449]: Positive Trust Anchors: Dec 12 18:14:04.225348 systemd-resolved[449]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:14:04.225352 systemd-resolved[449]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 18:14:04.225383 systemd-resolved[449]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:14:04.247579 systemd-resolved[449]: Defaulting to hostname 'linux'. Dec 12 18:14:04.248507 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:14:04.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.249403 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:14:04.304240 kernel: Loading iSCSI transport class v2.0-870. Dec 12 18:14:04.322220 kernel: iscsi: registered transport (tcp) Dec 12 18:14:04.349510 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:14:04.349610 kernel: QLogic iSCSI HBA Driver Dec 12 18:14:04.377243 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:14:04.401800 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:14:04.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.403817 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:14:04.449826 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:14:04.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.451756 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 18:14:04.452978 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 18:14:04.500038 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:14:04.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 18:14:04.501000 audit: BPF prog-id=7 op=LOAD Dec 12 18:14:04.501000 audit: BPF prog-id=8 op=LOAD Dec 12 18:14:04.502059 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:14:04.538331 systemd-udevd[694]: Using default interface naming scheme 'v257'. Dec 12 18:14:04.547481 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:14:04.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.549793 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 18:14:04.568574 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:14:04.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.570000 audit: BPF prog-id=9 op=LOAD Dec 12 18:14:04.570924 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:14:04.579876 dracut-pre-trigger[771]: rd.md=0: removing MD RAID activation Dec 12 18:14:04.605541 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:14:04.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.607334 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:14:04.608711 systemd-networkd[793]: lo: Link UP Dec 12 18:14:04.608718 systemd-networkd[793]: lo: Gained carrier Dec 12 18:14:04.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.609130 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:14:04.609856 systemd[1]: Reached target network.target - Network. Dec 12 18:14:04.718665 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:14:04.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.724501 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:14:04.772869 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:14:04.786207 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 12 18:14:04.793224 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Dec 12 18:14:04.796260 kernel: AES CTR mode by8 optimization enabled Dec 12 18:14:04.801212 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 12 18:14:04.808056 systemd-networkd[793]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:14:04.808066 systemd-networkd[793]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 12 18:14:04.808483 systemd-networkd[793]: eth0: Link UP Dec 12 18:14:04.809263 systemd-networkd[793]: eth0: Gained carrier Dec 12 18:14:04.809280 systemd-networkd[793]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:14:04.812539 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 18:14:04.823751 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 18:14:04.839684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:14:04.846566 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 18:14:04.849225 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 18:14:04.849741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:14:04.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.849854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:14:04.850660 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:14:04.852176 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:14:04.868428 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:14:04.874391 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 12 18:14:04.874422 kernel: audit: type=1130 audit(1765563244.868:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.869775 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:14:04.874872 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:14:04.875788 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:14:04.877011 disk-uuid[986]: Primary Header is updated. Dec 12 18:14:04.877011 disk-uuid[986]: Secondary Entries is updated. Dec 12 18:14:04.877011 disk-uuid[986]: Secondary Header is updated. Dec 12 18:14:04.877626 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:14:04.896530 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:14:04.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.903206 kernel: audit: type=1130 audit(1765563244.897:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.904351 systemd-networkd[793]: eth0: DHCPv4 address 10.0.8.19/25, gateway 10.0.8.1 acquired from 10.0.8.1 Dec 12 18:14:04.914106 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Dec 12 18:14:04.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:04.919198 kernel: audit: type=1130 audit(1765563244.914:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:05.923982 disk-uuid[988]: Warning: The kernel is still using the old partition table. Dec 12 18:14:05.923982 disk-uuid[988]: The new table will be used at the next reboot or after you Dec 12 18:14:05.923982 disk-uuid[988]: run partprobe(8) or kpartx(8) Dec 12 18:14:05.923982 disk-uuid[988]: The operation has completed successfully. Dec 12 18:14:05.933933 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:14:05.943570 kernel: audit: type=1130 audit(1765563245.934:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:05.943604 kernel: audit: type=1131 audit(1765563245.934:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:05.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:05.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:05.934077 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:14:05.936497 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 18:14:05.992238 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010) Dec 12 18:14:05.995722 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:14:05.995765 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:14:06.005720 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:14:06.005787 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:14:06.014243 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:14:06.015362 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:14:06.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.018034 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:14:06.020893 kernel: audit: type=1130 audit(1765563246.015:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:06.220141 ignition[1029]: Ignition 2.22.0 Dec 12 18:14:06.220157 ignition[1029]: Stage: fetch-offline Dec 12 18:14:06.220222 ignition[1029]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:14:06.220232 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:14:06.220332 ignition[1029]: parsed url from cmdline: "" Dec 12 18:14:06.222356 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:14:06.227242 kernel: audit: type=1130 audit(1765563246.223:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.220336 ignition[1029]: no config URL provided Dec 12 18:14:06.224329 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 12 18:14:06.220342 ignition[1029]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:14:06.220352 ignition[1029]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:14:06.220357 ignition[1029]: failed to fetch config: resource requires networking Dec 12 18:14:06.220528 ignition[1029]: Ignition finished successfully Dec 12 18:14:06.259347 systemd-networkd[793]: eth0: Gained IPv6LL Dec 12 18:14:06.270833 ignition[1042]: Ignition 2.22.0 Dec 12 18:14:06.270847 ignition[1042]: Stage: fetch Dec 12 18:14:06.271000 ignition[1042]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:14:06.271011 ignition[1042]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:14:06.271103 ignition[1042]: parsed url from cmdline: "" Dec 12 18:14:06.271107 ignition[1042]: no config URL provided Dec 12 18:14:06.271112 ignition[1042]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:14:06.271119 ignition[1042]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:14:06.271276 ignition[1042]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 12 18:14:06.271347 ignition[1042]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 12 18:14:06.271381 ignition[1042]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 12 18:14:06.582269 ignition[1042]: GET result: OK Dec 12 18:14:06.582502 ignition[1042]: parsing config with SHA512: e3ad27ad3e1f3541209368d617d356443db4c1223bd600b755a497e2f69728bb0e0d7a6a815be81b8205e0f4f1d9cb92af593e0a804cf264c821e4f5247aa30a Dec 12 18:14:06.589895 unknown[1042]: fetched base config from "system" Dec 12 18:14:06.589911 unknown[1042]: fetched base config from "system" Dec 12 18:14:06.590430 ignition[1042]: fetch: fetch complete Dec 12 18:14:06.589932 unknown[1042]: fetched user config from "openstack" Dec 12 18:14:06.590437 ignition[1042]: fetch: fetch passed Dec 12 18:14:06.590489 ignition[1042]: Ignition finished successfully Dec 12 18:14:06.592894 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 18:14:06.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.594836 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
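The fetch stage above shows Ignition's OpenStack flow: it waits for a config drive labelled config-2/CONFIG-2, falls back to the metadata service at http://169.254.169.254/openstack/latest/user_data, and logs the SHA-512 of the config it retrieved before parsing it. Below is a minimal Go sketch of that flow, illustrative only and not Ignition's actual implementation: the device label and URL are taken from the log, while the function names, error handling, and the choice to stop when a config drive is present are assumptions.

    // Hypothetical sketch (not Ignition source): probe the OpenStack config drive,
    // fall back to the metadata service, and report the SHA-512 of the payload,
    // mirroring the messages in the log above.
    package main

    import (
        "crypto/sha512"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    const (
        configDrive = "/dev/disk/by-label/config-2" // label waited on in the log
        userdataURL = "http://169.254.169.254/openstack/latest/user_data"
    )

    func fetchUserdata() ([]byte, error) {
        // If the config drive exists, a real implementation would mount it and
        // read user_data from it; this sketch only covers the metadata-service path.
        if _, err := os.Stat(configDrive); err == nil {
            return nil, fmt.Errorf("config drive %s present; read user_data from it instead", configDrive)
        }
        resp, err := http.Get(userdataURL) // metadata-service fallback, as in the log
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", userdataURL, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        data, err := fetchUserdata()
        if err != nil {
            fmt.Fprintln(os.Stderr, "fetch failed:", err)
            os.Exit(1)
        }
        // Ignition logs the SHA-512 of the config it is about to parse.
        fmt.Printf("parsing config with SHA512: %x\n", sha512.Sum512(data))
    }

Run on a host with no route to 169.254.169.254, the sketch simply reports the failure, which is the same situation the earlier fetch-offline stage records when it gives up with "failed to fetch config: resource requires networking".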
Dec 12 18:14:06.599588 kernel: audit: type=1130 audit(1765563246.593:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.636439 ignition[1054]: Ignition 2.22.0 Dec 12 18:14:06.636451 ignition[1054]: Stage: kargs Dec 12 18:14:06.636610 ignition[1054]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:14:06.636619 ignition[1054]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:14:06.637775 ignition[1054]: kargs: kargs passed Dec 12 18:14:06.637820 ignition[1054]: Ignition finished successfully Dec 12 18:14:06.639577 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:14:06.643787 kernel: audit: type=1130 audit(1765563246.639:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.641144 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:14:06.673349 ignition[1065]: Ignition 2.22.0 Dec 12 18:14:06.673361 ignition[1065]: Stage: disks Dec 12 18:14:06.673512 ignition[1065]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:14:06.673520 ignition[1065]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:14:06.674311 ignition[1065]: disks: disks passed Dec 12 18:14:06.674350 ignition[1065]: Ignition finished successfully Dec 12 18:14:06.675615 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:14:06.679649 kernel: audit: type=1130 audit(1765563246.676:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.676718 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:14:06.679981 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:14:06.680630 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:14:06.681237 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:14:06.681866 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:14:06.683499 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 18:14:06.756113 systemd-fsck[1078]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 12 18:14:06.760565 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:14:06.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:06.762056 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:14:06.971214 kernel: EXT4-fs (vda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. 
Dec 12 18:14:06.972038 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:14:06.973418 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:14:06.977908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:14:06.980213 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:14:06.981072 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 18:14:06.981931 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 12 18:14:06.982522 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:14:06.982555 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:14:07.005210 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:14:07.008254 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 18:14:07.021211 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1086) Dec 12 18:14:07.025866 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:14:07.025912 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:14:07.035917 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:14:07.035983 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:14:07.037220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:14:07.083245 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:07.091423 initrd-setup-root[1114]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:14:07.096469 initrd-setup-root[1121]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:14:07.101607 initrd-setup-root[1128]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:14:07.104984 initrd-setup-root[1135]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:14:07.214797 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:14:07.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:07.216964 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:14:07.218417 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 18:14:07.238733 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:14:07.242714 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:14:07.266027 ignition[1203]: INFO : Ignition 2.22.0 Dec 12 18:14:07.266027 ignition[1203]: INFO : Stage: mount Dec 12 18:14:07.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:07.267912 ignition[1203]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:14:07.267912 ignition[1203]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:14:07.267912 ignition[1203]: INFO : mount: mount passed Dec 12 18:14:07.267912 ignition[1203]: INFO : Ignition finished successfully Dec 12 18:14:07.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:07.266255 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 18:14:07.268259 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:14:08.123259 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:10.131252 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:14.157253 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:14.161538 coreos-metadata[1088]: Dec 12 18:14:14.161 WARN failed to locate config-drive, using the metadata service API instead Dec 12 18:14:14.180863 coreos-metadata[1088]: Dec 12 18:14:14.180 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 18:14:14.328060 coreos-metadata[1088]: Dec 12 18:14:14.327 INFO Fetch successful Dec 12 18:14:14.328727 coreos-metadata[1088]: Dec 12 18:14:14.328 INFO wrote hostname ci-4515-1-0-e-14f87f00b0 to /sysroot/etc/hostname Dec 12 18:14:14.331234 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 12 18:14:14.331365 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 12 18:14:14.339939 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 12 18:14:14.339968 kernel: audit: type=1130 audit(1765563254.331:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:14.339982 kernel: audit: type=1131 audit(1765563254.332:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:14.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:14.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:14.333273 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:14:14.364750 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:14:14.401376 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1222) Dec 12 18:14:14.409649 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:14:14.409798 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:14:14.426409 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:14:14.426511 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:14:14.432899 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
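The flatcar-openstack-hostname step above does one small job once the config drive never appears: fetch http://169.254.169.254/latest/meta-data/hostname and write the result to /sysroot/etc/hostname. A rough Go equivalent is sketched below, assuming nothing beyond what the log shows; the output path is deliberately a scratch file here and all naming is illustrative, since the real unit writes into the mounted sysroot.

    // Hypothetical sketch (not the coreos-metadata source): fetch the instance
    // hostname from the metadata service and write it to an /etc/hostname-style file.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    const hostnameURL = "http://169.254.169.254/latest/meta-data/hostname" // URL from the log

    func main() {
        resp, err := http.Get(hostnameURL)
        if err != nil {
            fmt.Fprintln(os.Stderr, "fetch failed:", err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil || resp.StatusCode != http.StatusOK {
            fmt.Fprintln(os.Stderr, "metadata service returned an error")
            os.Exit(1)
        }
        hostname := strings.TrimSpace(string(body))
        // 0644 matches typical /etc/hostname permissions; the path is illustrative.
        if err := os.WriteFile("/tmp/hostname-example", []byte(hostname+"\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, "write failed:", err)
            os.Exit(1)
        }
        fmt.Println("wrote hostname", hostname)
    }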
Dec 12 18:14:14.497253 ignition[1240]: INFO : Ignition 2.22.0 Dec 12 18:14:14.497253 ignition[1240]: INFO : Stage: files Dec 12 18:14:14.498927 ignition[1240]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:14:14.498927 ignition[1240]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:14:14.500107 ignition[1240]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:14:14.501758 ignition[1240]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:14:14.501758 ignition[1240]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:14:14.509574 ignition[1240]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:14:14.510575 ignition[1240]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:14:14.511565 unknown[1240]: wrote ssh authorized keys file for user: core Dec 12 18:14:14.512737 ignition[1240]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:14:14.515124 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:14:14.516574 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 12 18:14:14.591104 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:14:14.968208 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:14:14.968208 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:14:14.969928 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:14:14.969928 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:14:14.969928 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:14:14.969928 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:14:14.969928 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:14:14.969928 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:14:14.969928 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:14:14.973254 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:14:14.973254 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:14:14.973254 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:14:14.974963 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:14:14.974963 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:14:14.974963 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 12 18:14:15.083648 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 18:14:15.803655 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:14:15.803655 ignition[1240]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 18:14:15.805778 ignition[1240]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:14:15.809726 ignition[1240]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:14:15.809726 ignition[1240]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 18:14:15.809726 ignition[1240]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:14:15.811425 ignition[1240]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:14:15.811425 ignition[1240]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:14:15.811425 ignition[1240]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:14:15.811425 ignition[1240]: INFO : files: files passed Dec 12 18:14:15.811425 ignition[1240]: INFO : Ignition finished successfully Dec 12 18:14:15.817591 kernel: audit: type=1130 audit(1765563255.812:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.812406 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:14:15.814570 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:14:15.818019 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:14:15.836090 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:14:15.836248 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:14:15.844708 kernel: audit: type=1130 audit(1765563255.837:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.844737 kernel: audit: type=1131 audit(1765563255.837:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:15.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.844814 initrd-setup-root-after-ignition[1276]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:14:15.844814 initrd-setup-root-after-ignition[1276]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:14:15.845787 initrd-setup-root-after-ignition[1280]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:14:15.850271 kernel: audit: type=1130 audit(1765563255.846:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.845375 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:14:15.846521 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:14:15.851711 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:14:15.886374 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:14:15.886497 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:14:15.895557 kernel: audit: type=1130 audit(1765563255.887:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.895590 kernel: audit: type=1131 audit(1765563255.887:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.887908 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:14:15.896080 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:14:15.897318 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:14:15.898356 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:14:15.915467 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:14:15.920272 kernel: audit: type=1130 audit(1765563255.915:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 18:14:15.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.917451 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:14:15.930932 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:14:15.931127 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:14:15.932506 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:14:15.933536 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:14:15.934578 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:14:15.939679 kernel: audit: type=1131 audit(1765563255.935:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.934721 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:14:15.939801 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:14:15.940946 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:14:15.941830 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:14:15.942706 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:14:15.943620 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:14:15.944491 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:14:15.945344 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:14:15.946163 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:14:15.947021 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:14:15.947859 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:14:15.948756 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:14:15.949609 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:14:15.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.949739 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:14:15.950856 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:14:15.951737 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:14:15.952459 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:14:15.952551 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:14:15.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.953261 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 12 18:14:15.953362 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:14:15.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.954468 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 18:14:15.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.954561 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:14:15.955390 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:14:15.955482 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:14:15.956950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:14:15.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.957670 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:14:15.957772 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:14:15.959214 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:14:15.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.959661 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:14:15.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.959764 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:14:15.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.960576 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:14:15.960657 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:14:15.961348 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:14:15.961426 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:14:15.965350 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:14:15.979476 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:14:15.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:15.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:15.997275 ignition[1301]: INFO : Ignition 2.22.0 Dec 12 18:14:15.997275 ignition[1301]: INFO : Stage: umount Dec 12 18:14:15.998500 ignition[1301]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:14:15.998500 ignition[1301]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:14:15.998500 ignition[1301]: INFO : umount: umount passed Dec 12 18:14:15.998500 ignition[1301]: INFO : Ignition finished successfully Dec 12 18:14:15.997719 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:14:16.000179 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:14:16.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.000318 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:14:16.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.000923 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:14:16.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.000962 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:14:16.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.001538 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:14:16.001575 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:14:16.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.002309 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 18:14:16.002356 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 18:14:16.003062 systemd[1]: Stopped target network.target - Network. Dec 12 18:14:16.003743 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:14:16.003790 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:14:16.004475 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:14:16.005133 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:14:16.008288 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:14:16.008684 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:14:16.009396 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:14:16.010096 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:14:16.010134 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:14:16.010762 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:14:16.010789 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 12 18:14:16.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.011453 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 18:14:16.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.011477 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:14:16.012103 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:14:16.012175 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:14:16.012828 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:14:16.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.012862 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:14:16.013551 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:14:16.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.014181 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:14:16.015084 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:14:16.015164 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:14:16.016493 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:14:16.016592 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 18:14:16.019945 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:14:16.020065 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:14:16.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.026931 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:14:16.027052 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 18:14:16.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.029487 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:14:16.029892 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:14:16.030000 audit: BPF prog-id=6 op=UNLOAD Dec 12 18:14:16.029941 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:14:16.031452 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:14:16.031828 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:14:16.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:16.031877 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:14:16.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.032589 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:14:16.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.032624 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:14:16.033239 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:14:16.033272 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:14:16.033973 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:14:16.043000 audit: BPF prog-id=9 op=UNLOAD Dec 12 18:14:16.049672 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:14:16.049817 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:14:16.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.050888 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:14:16.050925 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:14:16.051494 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:14:16.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.051525 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:14:16.052226 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:14:16.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.052269 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:14:16.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.053354 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:14:16.053399 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:14:16.054528 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:14:16.054565 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:14:16.056209 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:14:16.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.056884 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Dec 12 18:14:16.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.056936 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:14:16.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.057729 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:14:16.057776 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:14:16.058484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:14:16.058527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:14:16.073477 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:14:16.073607 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:14:16.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.077940 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:14:16.078033 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:14:16.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:16.079049 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:14:16.080485 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:14:16.092369 systemd[1]: Switching root. Dec 12 18:14:16.132479 systemd-journald[409]: Journal stopped Dec 12 18:14:17.184357 systemd-journald[409]: Received SIGTERM from PID 1 (systemd). Dec 12 18:14:17.184446 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:14:17.184466 kernel: SELinux: policy capability open_perms=1 Dec 12 18:14:17.184478 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:14:17.184495 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:14:17.184516 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:14:17.184528 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:14:17.184544 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:14:17.184555 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:14:17.184566 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:14:17.184578 systemd[1]: Successfully loaded SELinux policy in 69.909ms. Dec 12 18:14:17.184600 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.818ms. 
Dec 12 18:14:17.184613 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:14:17.184627 systemd[1]: Detected virtualization kvm. Dec 12 18:14:17.184639 systemd[1]: Detected architecture x86-64. Dec 12 18:14:17.184655 systemd[1]: Detected first boot. Dec 12 18:14:17.184667 systemd[1]: Hostname set to <ci-4515-1-0-e-14f87f00b0>. Dec 12 18:14:17.184682 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 18:14:17.184694 zram_generator::config[1350]: No configuration found. Dec 12 18:14:17.184716 kernel: Guest personality initialized and is inactive Dec 12 18:14:17.184728 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:14:17.184739 kernel: Initialized host personality Dec 12 18:14:17.184750 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:14:17.184760 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:14:17.184774 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:14:17.184785 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:14:17.184797 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:14:17.184813 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:14:17.184824 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:14:17.184838 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:14:17.184849 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:14:17.184863 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:14:17.184875 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:14:17.184886 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:14:17.184897 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:14:17.184908 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:14:17.184920 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:14:17.184931 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:14:17.184945 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:14:17.184957 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:14:17.184968 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:14:17.184980 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:14:17.184991 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:14:17.185005 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:14:17.185016 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:14:17.185027 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:14:17.185038 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:14:17.185050 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:14:17.185062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:14:17.185074 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:14:17.185089 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 18:14:17.185101 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:14:17.185112 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:14:17.185124 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:14:17.185135 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:14:17.185147 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:14:17.185158 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:14:17.185171 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 18:14:17.185190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:14:17.185204 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 18:14:17.185216 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 18:14:17.185227 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:14:17.185238 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:14:17.185249 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:14:17.185262 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:14:17.185274 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:14:17.185285 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:14:17.185297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:14:17.185308 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:14:17.185319 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:14:17.185332 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:14:17.185349 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:14:17.185361 systemd[1]: Reached target machines.target - Containers. Dec 12 18:14:17.185372 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:14:17.185384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:14:17.185395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:14:17.185406 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:14:17.185417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:14:17.185430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 12 18:14:17.185447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:14:17.185463 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:14:17.185476 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:14:17.185488 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:14:17.185500 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:14:17.185511 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:14:17.185523 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:14:17.185534 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:14:17.185545 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:14:17.185558 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:14:17.185569 kernel: fuse: init (API version 7.41) Dec 12 18:14:17.185580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:14:17.185591 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:14:17.185605 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:14:17.185616 kernel: ACPI: bus type drm_connector registered Dec 12 18:14:17.185626 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:14:17.185639 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:14:17.185651 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:14:17.185662 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:14:17.185673 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:14:17.185703 systemd-journald[1433]: Collecting audit messages is enabled. Dec 12 18:14:17.185732 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:14:17.185747 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:14:17.185759 systemd-journald[1433]: Journal started Dec 12 18:14:17.185785 systemd-journald[1433]: Runtime Journal (/run/log/journal/84006ba051b04d1e9e24bfce20c48697) is 8M, max 319.5M, 311.5M free. Dec 12 18:14:17.040000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 18:14:17.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:17.135000 audit: BPF prog-id=14 op=UNLOAD Dec 12 18:14:17.135000 audit: BPF prog-id=13 op=UNLOAD Dec 12 18:14:17.136000 audit: BPF prog-id=15 op=LOAD Dec 12 18:14:17.136000 audit: BPF prog-id=16 op=LOAD Dec 12 18:14:17.136000 audit: BPF prog-id=17 op=LOAD Dec 12 18:14:17.182000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 18:14:17.182000 audit[1433]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc2f100df0 a2=4000 a3=0 items=0 ppid=1 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:17.182000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 18:14:16.955469 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:14:16.977541 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 18:14:16.978132 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:14:17.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.188200 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:14:17.188797 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:14:17.189311 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:14:17.189982 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:14:17.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.190652 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:14:17.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.191282 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:14:17.191420 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:14:17.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.192041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:14:17.192206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:14:17.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:17.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.192816 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:14:17.192943 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:14:17.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.193548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:14:17.193683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:14:17.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.194284 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:14:17.194409 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:14:17.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.194993 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:14:17.195120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:14:17.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.195838 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:14:17.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.196570 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
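The audit records above are flat key=value lists; the recurring auid=4294967295 and ses=4294967295 values are (uint32)-1, audit's "unset" marker, which is expected for PID 1 since it has no login session. A minimal Python sketch, with the record string condensed from the SERVICE_START entries above:

    # Parse one audit record (condensed from the SERVICE_START entries above)
    # into a dict of its key=value fields.
    record = ("SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=system_u:system_r:kernel_t:s0")
    rtype, *fields = record.split()
    parsed = dict(f.split("=", 1) for f in fields)
    # auid/ses of 4294967295 are (uint32)-1, i.e. "unset": PID 1 has no login session.
    assert parsed["auid"] == str(2**32 - 1)
    print(rtype, parsed)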
Dec 12 18:14:17.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.197830 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:14:17.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.198775 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:14:17.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.208592 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:14:17.210086 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 18:14:17.211720 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:14:17.213061 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:14:17.213519 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:14:17.213549 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:14:17.214819 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:14:17.216600 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:14:17.216703 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:14:17.240230 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:14:17.241688 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:14:17.242222 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:14:17.243073 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:14:17.243580 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:14:17.244500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:14:17.245757 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:14:17.247049 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:14:17.250799 systemd-journald[1433]: Time spent on flushing to /var/log/journal/84006ba051b04d1e9e24bfce20c48697 is 20.719ms for 1840 entries. Dec 12 18:14:17.250799 systemd-journald[1433]: System Journal (/var/log/journal/84006ba051b04d1e9e24bfce20c48697) is 8M, max 588.1M, 580.1M free. Dec 12 18:14:17.291704 systemd-journald[1433]: Received client request to flush runtime journal. 
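From the journald figures above (20.719 ms spent flushing 1840 entries to /var/log/journal), the average per-entry cost is just the quotient of the two reported numbers:

    # Average flush cost per journal entry, from the systemd-journald line above.
    flush_ms = 20.719
    entries = 1840
    per_entry_us = flush_ms * 1000 / entries
    print(f"{per_entry_us:.1f} µs per entry")   # ≈ 11.3 µs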
Dec 12 18:14:17.291765 kernel: loop1: detected capacity change from 0 to 119256 Dec 12 18:14:17.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.248814 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:14:17.249413 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:14:17.259847 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:14:17.261098 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:14:17.262661 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:14:17.268380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:14:17.281508 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:14:17.293614 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:14:17.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.299206 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:14:17.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.300000 audit: BPF prog-id=18 op=LOAD Dec 12 18:14:17.300000 audit: BPF prog-id=19 op=LOAD Dec 12 18:14:17.300000 audit: BPF prog-id=20 op=LOAD Dec 12 18:14:17.301659 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 18:14:17.302000 audit: BPF prog-id=21 op=LOAD Dec 12 18:14:17.303420 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:14:17.304874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:14:17.315552 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:14:17.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.317000 audit: BPF prog-id=22 op=LOAD Dec 12 18:14:17.317000 audit: BPF prog-id=23 op=LOAD Dec 12 18:14:17.317000 audit: BPF prog-id=24 op=LOAD Dec 12 18:14:17.318946 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
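systemd-oomd, whose start-up begins above, drives its policy off kernel pressure-stall information (PSI). A minimal reader for the memory PSI file it monitors, assuming a kernel with PSI enabled and the standard /proc/pressure/memory layout:

    # Read memory pressure-stall information, the signal systemd-oomd watches.
    # Each line looks like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0".
    with open("/proc/pressure/memory") as f:
        for line in f:
            kind, *fields = line.split()
            stats = dict(kv.split("=", 1) for kv in fields)
            print(kind, stats["avg10"], stats["avg60"], stats["avg300"])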
Dec 12 18:14:17.319259 kernel: loop2: detected capacity change from 0 to 111544 Dec 12 18:14:17.320000 audit: BPF prog-id=25 op=LOAD Dec 12 18:14:17.320000 audit: BPF prog-id=26 op=LOAD Dec 12 18:14:17.320000 audit: BPF prog-id=27 op=LOAD Dec 12 18:14:17.321284 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:14:17.332963 systemd-tmpfiles[1496]: ACLs are not supported, ignoring. Dec 12 18:14:17.332984 systemd-tmpfiles[1496]: ACLs are not supported, ignoring. Dec 12 18:14:17.336279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:14:17.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.359850 systemd-nsresourced[1500]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 18:14:17.360432 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:14:17.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.361129 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 18:14:17.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.387222 kernel: loop3: detected capacity change from 0 to 229808 Dec 12 18:14:17.420040 systemd-oomd[1494]: No swap; memory pressure usage will be degraded Dec 12 18:14:17.420579 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 18:14:17.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.434011 systemd-resolved[1495]: Positive Trust Anchors: Dec 12 18:14:17.434032 systemd-resolved[1495]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:14:17.434036 systemd-resolved[1495]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 18:14:17.434067 systemd-resolved[1495]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:14:17.452414 systemd-resolved[1495]: Using system hostname 'ci-4515-1-0-e-14f87f00b0'. Dec 12 18:14:17.453846 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
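The two positive trust anchors logged by systemd-resolved above are the root zone's DS records (key tags 20326 and 38696); algorithm 8 is RSA/SHA-256 and digest type 2 is SHA-256. A small sketch splitting them into their standard presentation fields:

    # The root DS records reported by systemd-resolved above, split into the
    # standard fields: owner, class, type, key tag, algorithm, digest type, digest.
    anchors = [
        ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
        ". IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16",
    ]
    for rr in anchors:
        owner, rrclass, rrtype, key_tag, alg, digest_type, digest = rr.split()
        print(f"key_tag={key_tag} alg={alg} digest_type={digest_type} digest={digest[:16]}…")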
Dec 12 18:14:17.454222 kernel: loop4: detected capacity change from 0 to 1656 Dec 12 18:14:17.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.454564 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:14:17.487238 kernel: loop5: detected capacity change from 0 to 119256 Dec 12 18:14:17.514254 kernel: loop6: detected capacity change from 0 to 111544 Dec 12 18:14:17.540246 kernel: loop7: detected capacity change from 0 to 229808 Dec 12 18:14:17.575245 kernel: loop1: detected capacity change from 0 to 1656 Dec 12 18:14:17.582262 (sd-merge)[1520]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. Dec 12 18:14:17.586966 (sd-merge)[1520]: Merged extensions into '/usr'. Dec 12 18:14:17.593557 systemd[1]: Reload requested from client PID 1478 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:14:17.593580 systemd[1]: Reloading... Dec 12 18:14:17.640273 zram_generator::config[1553]: No configuration found. Dec 12 18:14:17.813596 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:14:17.813855 systemd[1]: Reloading finished in 219 ms. Dec 12 18:14:17.845681 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:14:17.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.846612 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:14:17.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:17.863721 systemd[1]: Starting ensure-sysext.service... Dec 12 18:14:17.865630 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:14:17.866000 audit: BPF prog-id=8 op=UNLOAD Dec 12 18:14:17.866000 audit: BPF prog-id=7 op=UNLOAD Dec 12 18:14:17.866000 audit: BPF prog-id=28 op=LOAD Dec 12 18:14:17.866000 audit: BPF prog-id=29 op=LOAD Dec 12 18:14:17.867569 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
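After the sd-merge step above, the listed extension images are overlaid onto /usr (systemd-sysext does this with an overlayfs mount; `systemd-sysext status` reports the merge state interactively). A quick programmatic check, assuming util-linux's findmnt is available:

    import subprocess

    # Once extensions are merged, /usr should be an overlayfs combining the base
    # image with the extension images named in the sd-merge log entries above.
    fstype = subprocess.run(
        ["findmnt", "-n", "-o", "FSTYPE", "/usr"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(fstype)   # expected: "overlay" while extensions are merged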
Dec 12 18:14:17.868000 audit: BPF prog-id=30 op=LOAD Dec 12 18:14:17.868000 audit: BPF prog-id=18 op=UNLOAD Dec 12 18:14:17.868000 audit: BPF prog-id=31 op=LOAD Dec 12 18:14:17.868000 audit: BPF prog-id=32 op=LOAD Dec 12 18:14:17.868000 audit: BPF prog-id=19 op=UNLOAD Dec 12 18:14:17.868000 audit: BPF prog-id=20 op=UNLOAD Dec 12 18:14:17.869000 audit: BPF prog-id=33 op=LOAD Dec 12 18:14:17.869000 audit: BPF prog-id=15 op=UNLOAD Dec 12 18:14:17.869000 audit: BPF prog-id=34 op=LOAD Dec 12 18:14:17.869000 audit: BPF prog-id=35 op=LOAD Dec 12 18:14:17.869000 audit: BPF prog-id=16 op=UNLOAD Dec 12 18:14:17.869000 audit: BPF prog-id=17 op=UNLOAD Dec 12 18:14:17.870000 audit: BPF prog-id=36 op=LOAD Dec 12 18:14:17.870000 audit: BPF prog-id=21 op=UNLOAD Dec 12 18:14:17.870000 audit: BPF prog-id=37 op=LOAD Dec 12 18:14:17.870000 audit: BPF prog-id=22 op=UNLOAD Dec 12 18:14:17.870000 audit: BPF prog-id=38 op=LOAD Dec 12 18:14:17.870000 audit: BPF prog-id=39 op=LOAD Dec 12 18:14:17.870000 audit: BPF prog-id=23 op=UNLOAD Dec 12 18:14:17.870000 audit: BPF prog-id=24 op=UNLOAD Dec 12 18:14:17.871000 audit: BPF prog-id=40 op=LOAD Dec 12 18:14:17.871000 audit: BPF prog-id=25 op=UNLOAD Dec 12 18:14:17.871000 audit: BPF prog-id=41 op=LOAD Dec 12 18:14:17.871000 audit: BPF prog-id=42 op=LOAD Dec 12 18:14:17.871000 audit: BPF prog-id=26 op=UNLOAD Dec 12 18:14:17.871000 audit: BPF prog-id=27 op=UNLOAD Dec 12 18:14:17.876505 systemd[1]: Reload requested from client PID 1593 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:14:17.876523 systemd[1]: Reloading... Dec 12 18:14:17.883005 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:14:17.883035 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:14:17.883302 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:14:17.884345 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Dec 12 18:14:17.884445 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Dec 12 18:14:17.891900 systemd-tmpfiles[1594]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:14:17.891913 systemd-tmpfiles[1594]: Skipping /boot Dec 12 18:14:17.893165 systemd-udevd[1595]: Using default interface naming scheme 'v257'. Dec 12 18:14:17.899605 systemd-tmpfiles[1594]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:14:17.899619 systemd-tmpfiles[1594]: Skipping /boot Dec 12 18:14:17.915310 zram_generator::config[1627]: No configuration found. 
Dec 12 18:14:18.003214 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5 Dec 12 18:14:18.022210 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Dec 12 18:14:18.025213 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:14:18.026548 kernel: Console: switching to colour dummy device 80x25 Dec 12 18:14:18.027205 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:14:18.029314 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Dec 12 18:14:18.031420 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 12 18:14:18.031476 kernel: [drm] features: -context_init Dec 12 18:14:18.127215 kernel: [drm] number of scanouts: 1 Dec 12 18:14:18.127317 kernel: [drm] number of cap sets: 0 Dec 12 18:14:18.130208 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Dec 12 18:14:18.133177 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 12 18:14:18.134203 kernel: Console: switching to colour frame buffer device 160x50 Dec 12 18:14:18.138215 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 12 18:14:18.151288 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 12 18:14:18.151625 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 12 18:14:18.151781 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 18:14:18.194141 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:14:18.194607 systemd[1]: Reloading finished in 317 ms. Dec 12 18:14:18.210402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:14:18.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:18.211000 audit: BPF prog-id=43 op=LOAD Dec 12 18:14:18.211000 audit: BPF prog-id=40 op=UNLOAD Dec 12 18:14:18.211000 audit: BPF prog-id=44 op=LOAD Dec 12 18:14:18.211000 audit: BPF prog-id=45 op=LOAD Dec 12 18:14:18.211000 audit: BPF prog-id=41 op=UNLOAD Dec 12 18:14:18.212000 audit: BPF prog-id=42 op=UNLOAD Dec 12 18:14:18.212000 audit: BPF prog-id=46 op=LOAD Dec 12 18:14:18.212000 audit: BPF prog-id=36 op=UNLOAD Dec 12 18:14:18.212000 audit: BPF prog-id=47 op=LOAD Dec 12 18:14:18.212000 audit: BPF prog-id=37 op=UNLOAD Dec 12 18:14:18.213000 audit: BPF prog-id=48 op=LOAD Dec 12 18:14:18.213000 audit: BPF prog-id=49 op=LOAD Dec 12 18:14:18.213000 audit: BPF prog-id=38 op=UNLOAD Dec 12 18:14:18.213000 audit: BPF prog-id=39 op=UNLOAD Dec 12 18:14:18.214000 audit: BPF prog-id=50 op=LOAD Dec 12 18:14:18.214000 audit: BPF prog-id=33 op=UNLOAD Dec 12 18:14:18.214000 audit: BPF prog-id=51 op=LOAD Dec 12 18:14:18.214000 audit: BPF prog-id=52 op=LOAD Dec 12 18:14:18.214000 audit: BPF prog-id=34 op=UNLOAD Dec 12 18:14:18.214000 audit: BPF prog-id=35 op=UNLOAD Dec 12 18:14:18.215000 audit: BPF prog-id=53 op=LOAD Dec 12 18:14:18.215000 audit: BPF prog-id=30 op=UNLOAD Dec 12 18:14:18.215000 audit: BPF prog-id=54 op=LOAD Dec 12 18:14:18.215000 audit: BPF prog-id=55 op=LOAD Dec 12 18:14:18.215000 audit: BPF prog-id=31 op=UNLOAD Dec 12 18:14:18.215000 audit: BPF prog-id=32 op=UNLOAD Dec 12 18:14:18.215000 audit: BPF prog-id=56 op=LOAD Dec 12 18:14:18.215000 audit: BPF prog-id=57 op=LOAD Dec 12 18:14:18.237000 audit: BPF prog-id=28 op=UNLOAD Dec 12 18:14:18.237000 audit: BPF prog-id=29 op=UNLOAD Dec 12 18:14:18.239569 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:14:18.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.266377 systemd[1]: Finished ensure-sysext.service. Dec 12 18:14:18.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.282738 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:14:18.286850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:14:18.288078 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:14:18.291625 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:14:18.291907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:14:18.314109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:14:18.316373 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:14:18.318102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:14:18.320043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:14:18.321645 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Dec 12 18:14:18.322305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 18:14:18.322414 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:14:18.323462 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:14:18.324983 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:14:18.325619 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:14:18.326590 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:14:18.328931 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:14:18.330041 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:14:18.328000 audit: BPF prog-id=58 op=LOAD Dec 12 18:14:18.331405 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:14:18.333342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:14:18.334041 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:14:18.335117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:14:18.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.335332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:14:18.336111 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:14:18.336845 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:14:18.338202 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 12 18:14:18.338244 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 12 18:14:18.339000 audit[1747]: SYSTEM_BOOT pid=1747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.340012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:14:18.340215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:14:18.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:18.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.340896 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:14:18.341057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:14:18.344283 kernel: PTP clock support registered Dec 12 18:14:18.347837 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Dec 12 18:14:18.348048 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Dec 12 18:14:18.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.349389 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:14:18.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.351939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:14:18.352040 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:14:18.353356 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:14:18.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:18.374493 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:14:18.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:18.377000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 18:14:18.377000 audit[1774]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9ccd14b0 a2=420 a3=0 items=0 ppid=1729 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:18.377000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:14:18.378345 augenrules[1774]: No rules Dec 12 18:14:18.380754 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:14:18.381036 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:14:18.419514 systemd-networkd[1745]: lo: Link UP Dec 12 18:14:18.419525 systemd-networkd[1745]: lo: Gained carrier Dec 12 18:14:18.421055 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:14:18.421121 systemd-networkd[1745]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:14:18.421126 systemd-networkd[1745]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:14:18.422319 systemd[1]: Reached target network.target - Network. Dec 12 18:14:18.422455 systemd-networkd[1745]: eth0: Link UP Dec 12 18:14:18.422766 systemd-networkd[1745]: eth0: Gained carrier Dec 12 18:14:18.422785 systemd-networkd[1745]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:14:18.424097 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:14:18.427322 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:14:18.444522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:14:18.453286 systemd-networkd[1745]: eth0: DHCPv4 address 10.0.8.19/25, gateway 10.0.8.1 acquired from 10.0.8.1 Dec 12 18:14:18.455780 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:14:18.458208 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:14:18.459629 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:14:19.075090 ldconfig[1737]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:14:19.079571 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:14:19.082468 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:14:19.108210 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:14:19.109687 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:14:19.110311 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:14:19.110752 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
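The PROCTITLE value in the audit event above is simply the process's argv hex-encoded with NUL separators; decoding it shows the auditctl invocation that loaded the (empty) rule set:

    # Decode the hex-encoded proctitle from the audit event above.
    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print([a.decode() for a in argv])
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']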
Dec 12 18:14:19.111149 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:14:19.113436 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:14:19.113969 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:14:19.114426 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 18:14:19.114865 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 18:14:19.115257 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:14:19.115624 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:14:19.115654 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:14:19.116001 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:14:19.120765 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:14:19.122813 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:14:19.125360 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:14:19.126287 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:14:19.126862 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:14:19.134328 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:14:19.136252 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:14:19.137412 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:14:19.138837 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:14:19.139397 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:14:19.140018 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:14:19.140113 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:14:19.143458 systemd[1]: Starting chronyd.service - NTP client/server... Dec 12 18:14:19.146543 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:14:19.151162 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:14:19.153890 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:14:19.158988 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:14:19.161154 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:14:19.163316 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:19.163245 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:14:19.167448 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:14:19.168747 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:14:19.170943 jq[1805]: false Dec 12 18:14:19.172730 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:14:19.178049 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
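For the DHCPv4 lease systemd-networkd reported a little earlier (10.0.8.19/25 with gateway 10.0.8.1 on eth0), the /25 prefix can be unpacked with the standard-library ipaddress module to confirm the gateway is on-link:

    import ipaddress

    # The lease reported above: 10.0.8.19/25, gateway 10.0.8.1.
    iface = ipaddress.ip_interface("10.0.8.19/25")
    gateway = ipaddress.ip_address("10.0.8.1")

    print(iface.network)                # 10.0.8.0/25
    print(iface.network.num_addresses)  # 128 addresses in the prefix
    print(gateway in iface.network)     # True: the gateway is on-link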
Dec 12 18:14:19.180705 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:14:19.182240 google_oslogin_nss_cache[1808]: oslogin_cache_refresh[1808]: Refreshing passwd entry cache Dec 12 18:14:19.182251 oslogin_cache_refresh[1808]: Refreshing passwd entry cache Dec 12 18:14:19.182554 extend-filesystems[1807]: Found /dev/vda6 Dec 12 18:14:19.182558 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:14:19.185515 chronyd[1799]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 12 18:14:19.186326 chronyd[1799]: Loaded seccomp filter (level 2) Dec 12 18:14:19.188465 extend-filesystems[1807]: Found /dev/vda9 Dec 12 18:14:19.190666 extend-filesystems[1807]: Checking size of /dev/vda9 Dec 12 18:14:19.190874 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:14:19.191464 oslogin_cache_refresh[1808]: Failure getting users, quitting Dec 12 18:14:19.191955 google_oslogin_nss_cache[1808]: oslogin_cache_refresh[1808]: Failure getting users, quitting Dec 12 18:14:19.191955 google_oslogin_nss_cache[1808]: oslogin_cache_refresh[1808]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:14:19.191955 google_oslogin_nss_cache[1808]: oslogin_cache_refresh[1808]: Refreshing group entry cache Dec 12 18:14:19.191484 oslogin_cache_refresh[1808]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:14:19.191531 oslogin_cache_refresh[1808]: Refreshing group entry cache Dec 12 18:14:19.192584 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:14:19.193162 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:14:19.193874 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:14:19.196059 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:14:19.200808 google_oslogin_nss_cache[1808]: oslogin_cache_refresh[1808]: Failure getting groups, quitting Dec 12 18:14:19.200808 google_oslogin_nss_cache[1808]: oslogin_cache_refresh[1808]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:14:19.199487 oslogin_cache_refresh[1808]: Failure getting groups, quitting Dec 12 18:14:19.199503 oslogin_cache_refresh[1808]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:14:19.204581 systemd[1]: Started chronyd.service - NTP client/server. Dec 12 18:14:19.204806 extend-filesystems[1807]: Resized partition /dev/vda9 Dec 12 18:14:19.211255 extend-filesystems[1835]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:14:19.211879 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:14:19.212342 jq[1830]: true Dec 12 18:14:19.213932 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:14:19.214193 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:14:19.214457 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:14:19.214642 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:14:19.215251 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 12 18:14:19.215434 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:14:19.219844 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Dec 12 18:14:19.216736 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:14:19.216925 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:14:19.222420 update_engine[1829]: I20251212 18:14:19.222342 1829 main.cc:92] Flatcar Update Engine starting Dec 12 18:14:19.253966 jq[1845]: true Dec 12 18:14:19.260900 tar[1841]: linux-amd64/LICENSE Dec 12 18:14:19.261969 tar[1841]: linux-amd64/helm Dec 12 18:14:19.262328 systemd-logind[1824]: New seat seat0. Dec 12 18:14:19.263854 systemd-logind[1824]: Watching system buttons on /dev/input/event3 (Power Button) Dec 12 18:14:19.263877 systemd-logind[1824]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:14:19.264621 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:14:19.287376 dbus-daemon[1802]: [system] SELinux support is enabled Dec 12 18:14:19.287701 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:14:19.291849 dbus-daemon[1802]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 12 18:14:19.390744 update_engine[1829]: I20251212 18:14:19.290146 1829 update_check_scheduler.cc:74] Next update check in 4m20s Dec 12 18:14:19.290963 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:14:19.290985 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:14:19.294507 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:14:19.294529 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:14:19.295045 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:14:19.297404 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:14:19.345267 locksmithd[1873]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:14:19.391548 sshd_keygen[1831]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:14:19.413697 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:14:19.416227 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:14:19.435325 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:14:19.437615 systemd[1]: Started sshd@0-10.0.8.19:22-139.178.89.65:52994.service - OpenSSH per-connection server daemon (139.178.89.65:52994). Dec 12 18:14:19.439435 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:14:19.439651 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:14:19.442224 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:14:19.453491 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:14:19.456836 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:14:19.458722 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:14:19.459937 systemd[1]: Reached target getty.target - Login Prompts. 
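The EXT4 resize above grows vda9 from 1,617,920 to 11,516,923 blocks; assuming the usual 4 KiB ext4 block size (the log does not state it), that is roughly 6.2 GiB growing to about 43.9 GiB:

    # Convert the ext4 resize reported above into GiB.
    # Assumption: 4096-byte blocks, the common ext4 default; the log does not say.
    BLOCK = 4096
    before, after = 1_617_920, 11_516_923

    def to_gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"{to_gib(before):.1f} GiB -> {to_gib(after):.1f} GiB")   # ≈ 6.2 -> 43.9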
Dec 12 18:14:19.478434 containerd[1848]: time="2025-12-12T18:14:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:14:19.478434 containerd[1848]: time="2025-12-12T18:14:19.478209426Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 18:14:19.479207 bash[1872]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:14:19.483903 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:14:19.487296 systemd[1]: Starting sshkeys.service... Dec 12 18:14:19.490972 containerd[1848]: time="2025-12-12T18:14:19.490927385Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.607µs" Dec 12 18:14:19.491067 containerd[1848]: time="2025-12-12T18:14:19.491051133Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:14:19.491150 containerd[1848]: time="2025-12-12T18:14:19.491138605Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:14:19.491222 containerd[1848]: time="2025-12-12T18:14:19.491210560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:14:19.492360 containerd[1848]: time="2025-12-12T18:14:19.492230518Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:14:19.492412 containerd[1848]: time="2025-12-12T18:14:19.492378592Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:14:19.492528 containerd[1848]: time="2025-12-12T18:14:19.492476374Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:14:19.492556 containerd[1848]: time="2025-12-12T18:14:19.492532170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:14:19.492828 containerd[1848]: time="2025-12-12T18:14:19.492806423Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:14:19.492870 containerd[1848]: time="2025-12-12T18:14:19.492827923Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:14:19.492870 containerd[1848]: time="2025-12-12T18:14:19.492840896Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:14:19.492870 containerd[1848]: time="2025-12-12T18:14:19.492849492Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:14:19.493001 containerd[1848]: time="2025-12-12T18:14:19.492985661Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:14:19.493034 containerd[1848]: time="2025-12-12T18:14:19.492999963Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:14:19.493070 containerd[1848]: time="2025-12-12T18:14:19.493057790Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:14:19.493239 containerd[1848]: time="2025-12-12T18:14:19.493222866Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:14:19.493267 containerd[1848]: time="2025-12-12T18:14:19.493251414Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:14:19.493267 containerd[1848]: time="2025-12-12T18:14:19.493260414Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:14:19.493316 containerd[1848]: time="2025-12-12T18:14:19.493288464Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:14:19.493539 containerd[1848]: time="2025-12-12T18:14:19.493518151Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:14:19.493598 containerd[1848]: time="2025-12-12T18:14:19.493584019Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:14:19.514854 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:14:19.517720 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:14:19.538053 containerd[1848]: time="2025-12-12T18:14:19.537998853Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:14:19.538237 containerd[1848]: time="2025-12-12T18:14:19.538084618Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:14:19.538282 containerd[1848]: time="2025-12-12T18:14:19.538240436Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:14:19.538282 containerd[1848]: time="2025-12-12T18:14:19.538254848Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:14:19.538282 containerd[1848]: time="2025-12-12T18:14:19.538268933Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:14:19.538282 containerd[1848]: time="2025-12-12T18:14:19.538283192Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:14:19.538356 containerd[1848]: time="2025-12-12T18:14:19.538308608Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:14:19.538356 containerd[1848]: time="2025-12-12T18:14:19.538325698Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:14:19.538356 containerd[1848]: time="2025-12-12T18:14:19.538338782Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:14:19.538356 containerd[1848]: time="2025-12-12T18:14:19.538350966Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Dec 12 18:14:19.538425 containerd[1848]: time="2025-12-12T18:14:19.538361568Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:14:19.538425 containerd[1848]: time="2025-12-12T18:14:19.538372681Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:14:19.538425 containerd[1848]: time="2025-12-12T18:14:19.538382090Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:14:19.538425 containerd[1848]: time="2025-12-12T18:14:19.538397075Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:14:19.538551 containerd[1848]: time="2025-12-12T18:14:19.538534034Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:14:19.538586 containerd[1848]: time="2025-12-12T18:14:19.538556472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:14:19.538609 containerd[1848]: time="2025-12-12T18:14:19.538587988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:14:19.538609 containerd[1848]: time="2025-12-12T18:14:19.538605659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:14:19.538659 containerd[1848]: time="2025-12-12T18:14:19.538617934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:14:19.538659 containerd[1848]: time="2025-12-12T18:14:19.538627904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:14:19.538659 containerd[1848]: time="2025-12-12T18:14:19.538640099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:14:19.538659 containerd[1848]: time="2025-12-12T18:14:19.538650913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:14:19.538737 containerd[1848]: time="2025-12-12T18:14:19.538662552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:14:19.538737 containerd[1848]: time="2025-12-12T18:14:19.538673672Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:14:19.538737 containerd[1848]: time="2025-12-12T18:14:19.538683236Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:14:19.538737 containerd[1848]: time="2025-12-12T18:14:19.538708469Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:14:19.538828 containerd[1848]: time="2025-12-12T18:14:19.538755659Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:14:19.538828 containerd[1848]: time="2025-12-12T18:14:19.538770658Z" level=info msg="Start snapshots syncer" Dec 12 18:14:19.538828 containerd[1848]: time="2025-12-12T18:14:19.538795972Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:14:19.539138 containerd[1848]: time="2025-12-12T18:14:19.539103991Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:14:19.539317 containerd[1848]: time="2025-12-12T18:14:19.539151924Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:14:19.539317 containerd[1848]: time="2025-12-12T18:14:19.539220713Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:14:19.539368 containerd[1848]: time="2025-12-12T18:14:19.539329596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:14:19.539368 containerd[1848]: time="2025-12-12T18:14:19.539350474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:14:19.539368 containerd[1848]: time="2025-12-12T18:14:19.539360701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:14:19.539418 containerd[1848]: time="2025-12-12T18:14:19.539369757Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:14:19.539418 containerd[1848]: time="2025-12-12T18:14:19.539382208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:14:19.539418 containerd[1848]: time="2025-12-12T18:14:19.539392888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:14:19.539418 containerd[1848]: time="2025-12-12T18:14:19.539403376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:14:19.539418 containerd[1848]: time="2025-12-12T18:14:19.539412695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539422542Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539457361Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539470641Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539479774Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539489711Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539496622Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539505115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539523837Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539573355Z" level=info msg="runtime interface created" Dec 12 18:14:19.539580 containerd[1848]: time="2025-12-12T18:14:19.539580470Z" level=info msg="created NRI interface" Dec 12 18:14:19.539821 containerd[1848]: time="2025-12-12T18:14:19.539589746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:14:19.539821 containerd[1848]: time="2025-12-12T18:14:19.539610658Z" level=info msg="Connect containerd service" Dec 12 18:14:19.539821 containerd[1848]: time="2025-12-12T18:14:19.539632979Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:14:19.540223 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:19.540471 containerd[1848]: time="2025-12-12T18:14:19.540448278Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:14:19.638028 containerd[1848]: time="2025-12-12T18:14:19.637978978Z" level=info msg="Start subscribing containerd event" Dec 12 18:14:19.638028 containerd[1848]: time="2025-12-12T18:14:19.638031234Z" level=info msg="Start recovering state" Dec 12 18:14:19.638199 containerd[1848]: time="2025-12-12T18:14:19.638146664Z" level=info msg="Start event monitor" Dec 12 18:14:19.638199 containerd[1848]: time="2025-12-12T18:14:19.638159605Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:14:19.638199 containerd[1848]: time="2025-12-12T18:14:19.638168065Z" level=info msg="Start streaming server" Dec 12 18:14:19.638199 containerd[1848]: time="2025-12-12T18:14:19.638176150Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:14:19.638199 containerd[1848]: time="2025-12-12T18:14:19.638196320Z" level=info msg="runtime interface starting 
up..." Dec 12 18:14:19.638323 containerd[1848]: time="2025-12-12T18:14:19.638206037Z" level=info msg="starting plugins..." Dec 12 18:14:19.638323 containerd[1848]: time="2025-12-12T18:14:19.638224056Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:14:19.638802 containerd[1848]: time="2025-12-12T18:14:19.638719937Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:14:19.638802 containerd[1848]: time="2025-12-12T18:14:19.638790262Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:14:19.639027 containerd[1848]: time="2025-12-12T18:14:19.638974266Z" level=info msg="containerd successfully booted in 0.172215s" Dec 12 18:14:19.639245 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:14:19.718261 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Dec 12 18:14:19.754286 extend-filesystems[1835]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 18:14:19.754286 extend-filesystems[1835]: old_desc_blocks = 1, new_desc_blocks = 6 Dec 12 18:14:19.754286 extend-filesystems[1835]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Dec 12 18:14:19.762303 extend-filesystems[1807]: Resized filesystem in /dev/vda9 Dec 12 18:14:19.756755 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:14:19.757896 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:14:19.780958 tar[1841]: linux-amd64/README.md Dec 12 18:14:19.813600 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:14:19.826399 systemd-networkd[1745]: eth0: Gained IPv6LL Dec 12 18:14:19.828916 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:14:19.831078 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:14:19.832972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:14:19.835776 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:14:19.878599 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:14:20.176238 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:20.287316 sshd[1895]: Accepted publickey for core from 139.178.89.65 port 52994 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:14:20.289555 sshd-session[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:14:20.296424 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:14:20.299229 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:14:20.306680 systemd-logind[1824]: New session 1 of user core. Dec 12 18:14:20.327474 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:14:20.332227 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:14:20.356960 (systemd)[1947]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:14:20.359542 systemd-logind[1824]: New session c1 of user core. Dec 12 18:14:20.472046 systemd[1947]: Queued start job for default target default.target. Dec 12 18:14:20.492371 systemd[1947]: Created slice app.slice - User Application Slice. Dec 12 18:14:20.492413 systemd[1947]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. 
Dec 12 18:14:20.492427 systemd[1947]: Reached target paths.target - Paths. Dec 12 18:14:20.492477 systemd[1947]: Reached target timers.target - Timers. Dec 12 18:14:20.493655 systemd[1947]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:14:20.494397 systemd[1947]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 18:14:20.504084 systemd[1947]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:14:20.504373 systemd[1947]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 18:14:20.504510 systemd[1947]: Reached target sockets.target - Sockets. Dec 12 18:14:20.504551 systemd[1947]: Reached target basic.target - Basic System. Dec 12 18:14:20.504584 systemd[1947]: Reached target default.target - Main User Target. Dec 12 18:14:20.504613 systemd[1947]: Startup finished in 139ms. Dec 12 18:14:20.504795 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:14:20.507135 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:14:20.553225 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:21.004885 systemd[1]: Started sshd@1-10.0.8.19:22-139.178.89.65:53002.service - OpenSSH per-connection server daemon (139.178.89.65:53002). Dec 12 18:14:21.102556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:14:21.106600 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:14:21.835977 sshd[1961]: Accepted publickey for core from 139.178.89.65 port 53002 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:14:21.837730 sshd-session[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:14:21.843140 systemd-logind[1824]: New session 2 of user core. Dec 12 18:14:21.854395 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:14:21.906693 kubelet[1969]: E1212 18:14:21.906608 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:14:21.909084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:14:21.909234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:14:21.909641 systemd[1]: kubelet.service: Consumed 1.065s CPU time, 271.4M memory peak. Dec 12 18:14:22.191364 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:22.307591 sshd[1980]: Connection closed by 139.178.89.65 port 53002 Dec 12 18:14:22.307918 sshd-session[1961]: pam_unix(sshd:session): session closed for user core Dec 12 18:14:22.310857 systemd-logind[1824]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:14:22.311146 systemd[1]: sshd@1-10.0.8.19:22-139.178.89.65:53002.service: Deactivated successfully. Dec 12 18:14:22.312767 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:14:22.314593 systemd-logind[1824]: Removed session 2. Dec 12 18:14:22.488731 systemd[1]: Started sshd@2-10.0.8.19:22-139.178.89.65:53018.service - OpenSSH per-connection server daemon (139.178.89.65:53018). 
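The kubelet exit above is the expected state for a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally generated by kubeadm init or kubeadm join, and until it exists the unit keeps failing and being rescheduled, as the later restart-counter records show. Only as a rough illustration of what the missing file looks like (a hypothetical minimal KubeletConfiguration, not what kubeadm would actually write):

    # hypothetical minimal /var/lib/kubelet/config.yaml, for illustration only
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # systemd cgroup driver, matching SystemdCgroup=true in the containerd CRI config above
    cgroupDriver: systemd
    # containerd socket that the log shows being served at /run/containerd/containerd.sock
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF

In practice the file comes from kubeadm together with /etc/kubernetes/kubelet.conf, so nothing needs to be hand-written here; the failures simply stop once the node is initialized or joined.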
Dec 12 18:14:22.568219 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:23.355074 sshd[1988]: Accepted publickey for core from 139.178.89.65 port 53018 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:14:23.356477 sshd-session[1988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:14:23.360848 systemd-logind[1824]: New session 3 of user core. Dec 12 18:14:23.376524 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:14:23.845428 sshd[1992]: Connection closed by 139.178.89.65 port 53018 Dec 12 18:14:23.845881 sshd-session[1988]: pam_unix(sshd:session): session closed for user core Dec 12 18:14:23.849371 systemd[1]: sshd@2-10.0.8.19:22-139.178.89.65:53018.service: Deactivated successfully. Dec 12 18:14:23.851101 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:14:23.852881 systemd-logind[1824]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:14:23.853669 systemd-logind[1824]: Removed session 3. Dec 12 18:14:26.204260 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:26.211092 coreos-metadata[1801]: Dec 12 18:14:26.210 WARN failed to locate config-drive, using the metadata service API instead Dec 12 18:14:26.226268 coreos-metadata[1801]: Dec 12 18:14:26.226 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 12 18:14:26.507668 coreos-metadata[1801]: Dec 12 18:14:26.507 INFO Fetch successful Dec 12 18:14:26.507668 coreos-metadata[1801]: Dec 12 18:14:26.507 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 18:14:26.577298 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:14:26.597612 coreos-metadata[1911]: Dec 12 18:14:26.597 WARN failed to locate config-drive, using the metadata service API instead Dec 12 18:14:26.609621 coreos-metadata[1911]: Dec 12 18:14:26.609 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 12 18:14:27.764870 coreos-metadata[1801]: Dec 12 18:14:27.764 INFO Fetch successful Dec 12 18:14:27.764870 coreos-metadata[1801]: Dec 12 18:14:27.764 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 12 18:14:27.889781 coreos-metadata[1911]: Dec 12 18:14:27.889 INFO Fetch successful Dec 12 18:14:27.889781 coreos-metadata[1911]: Dec 12 18:14:27.889 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 18:14:27.995571 coreos-metadata[1801]: Dec 12 18:14:27.995 INFO Fetch successful Dec 12 18:14:27.995571 coreos-metadata[1801]: Dec 12 18:14:27.995 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 12 18:14:28.111668 coreos-metadata[1911]: Dec 12 18:14:28.111 INFO Fetch successful Dec 12 18:14:28.119616 unknown[1911]: wrote ssh authorized keys file for user: core Dec 12 18:14:28.141470 coreos-metadata[1801]: Dec 12 18:14:28.141 INFO Fetch successful Dec 12 18:14:28.141470 coreos-metadata[1801]: Dec 12 18:14:28.141 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 12 18:14:28.153294 update-ssh-keys[2005]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:14:28.154558 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:14:28.156478 systemd[1]: Finished sshkeys.service. 
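The coreos-metadata warnings above ("failed to locate config-drive, using the metadata service API instead") line up with the kernel's repeated "Can't lookup blockdev" messages for /dev/disk/by-label/config-2: no config-drive is attached to this instance, so the agents fall back to the link-local metadata endpoint. A sketch of querying the same endpoints by hand from inside the instance, assuming curl is available and the usual OpenStack/EC2-compatible service at 169.254.169.254:

    # OpenStack-native metadata document, as fetched by coreos-metadata above
    curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
    # EC2-compatible paths used for the individual attributes
    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/instance-id
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key

The last of these is the key that coreos-metadata-sshkeys@core writes into /home/core/.ssh/authorized_keys, as the update-ssh-keys record confirms.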
Dec 12 18:14:28.260954 coreos-metadata[1801]: Dec 12 18:14:28.260 INFO Fetch successful Dec 12 18:14:28.260954 coreos-metadata[1801]: Dec 12 18:14:28.260 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 12 18:14:28.375921 coreos-metadata[1801]: Dec 12 18:14:28.375 INFO Fetch successful Dec 12 18:14:28.420569 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:14:28.421041 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:14:28.421176 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:14:28.426299 systemd[1]: Startup finished in 4.005s (kernel) + 12.528s (initrd) + 12.196s (userspace) = 28.730s. Dec 12 18:14:32.073606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:14:32.075148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:14:32.224810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:14:32.228822 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:14:32.268447 kubelet[2022]: E1212 18:14:32.268389 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:14:32.273974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:14:32.274104 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:14:32.274475 systemd[1]: kubelet.service: Consumed 155ms CPU time, 112.8M memory peak. Dec 12 18:14:34.012766 systemd[1]: Started sshd@3-10.0.8.19:22-139.178.89.65:43138.service - OpenSSH per-connection server daemon (139.178.89.65:43138). Dec 12 18:14:34.831176 sshd[2034]: Accepted publickey for core from 139.178.89.65 port 43138 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:14:34.833360 sshd-session[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:14:34.840858 systemd-logind[1824]: New session 4 of user core. Dec 12 18:14:34.857588 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:14:35.306965 sshd[2037]: Connection closed by 139.178.89.65 port 43138 Dec 12 18:14:35.307247 sshd-session[2034]: pam_unix(sshd:session): session closed for user core Dec 12 18:14:35.312030 systemd[1]: sshd@3-10.0.8.19:22-139.178.89.65:43138.service: Deactivated successfully. Dec 12 18:14:35.313510 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:14:35.314225 systemd-logind[1824]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:14:35.315166 systemd-logind[1824]: Removed session 4. Dec 12 18:14:35.477705 systemd[1]: Started sshd@4-10.0.8.19:22-139.178.89.65:43142.service - OpenSSH per-connection server daemon (139.178.89.65:43142). Dec 12 18:14:36.296048 sshd[2043]: Accepted publickey for core from 139.178.89.65 port 43142 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:14:36.297700 sshd-session[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:14:36.302021 systemd-logind[1824]: New session 5 of user core. 
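The "Startup finished in 4.005s (kernel) + 12.528s (initrd) + 12.196s (userspace) = 28.730s" record is the same summary that systemd-analyze prints after boot. A short sketch for breaking the userspace portion down further (standard systemd tooling, nothing Flatcar-specific assumed):

    # overall split, matching the journal record above
    systemd-analyze
    # slowest units first; coreos-metadata, which finishes only just before
    # multi-user.target is reached here, would be expected near the top
    systemd-analyze blame
    # the chain of units gating multi-user.target
    systemd-analyze critical-chain multi-user.target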
Dec 12 18:14:36.313522 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:14:36.768142 sshd[2046]: Connection closed by 139.178.89.65 port 43142 Dec 12 18:14:36.768575 sshd-session[2043]: pam_unix(sshd:session): session closed for user core Dec 12 18:14:36.771594 systemd[1]: sshd@4-10.0.8.19:22-139.178.89.65:43142.service: Deactivated successfully. Dec 12 18:14:36.773355 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:14:36.775215 systemd-logind[1824]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:14:36.775949 systemd-logind[1824]: Removed session 5. Dec 12 18:14:36.947415 systemd[1]: Started sshd@5-10.0.8.19:22-139.178.89.65:43146.service - OpenSSH per-connection server daemon (139.178.89.65:43146). Dec 12 18:14:37.765786 sshd[2052]: Accepted publickey for core from 139.178.89.65 port 43146 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:14:37.767097 sshd-session[2052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:14:37.775839 systemd-logind[1824]: New session 6 of user core. Dec 12 18:14:37.793477 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:14:38.240172 sshd[2055]: Connection closed by 139.178.89.65 port 43146 Dec 12 18:14:38.239990 sshd-session[2052]: pam_unix(sshd:session): session closed for user core Dec 12 18:14:38.243418 systemd[1]: sshd@5-10.0.8.19:22-139.178.89.65:43146.service: Deactivated successfully. Dec 12 18:14:38.245075 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:14:38.246327 systemd-logind[1824]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:14:38.247290 systemd-logind[1824]: Removed session 6. Dec 12 18:14:38.409056 systemd[1]: Started sshd@6-10.0.8.19:22-139.178.89.65:43160.service - OpenSSH per-connection server daemon (139.178.89.65:43160). Dec 12 18:14:39.215223 sshd[2061]: Accepted publickey for core from 139.178.89.65 port 43160 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:14:39.216400 sshd-session[2061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:14:39.221345 systemd-logind[1824]: New session 7 of user core. Dec 12 18:14:39.238493 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:14:39.560164 sudo[2065]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:14:39.560494 sudo[2065]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:14:39.582392 sudo[2065]: pam_unix(sudo:session): session closed for user root Dec 12 18:14:39.737003 sshd[2064]: Connection closed by 139.178.89.65 port 43160 Dec 12 18:14:39.737505 sshd-session[2061]: pam_unix(sshd:session): session closed for user core Dec 12 18:14:39.740851 systemd[1]: sshd@6-10.0.8.19:22-139.178.89.65:43160.service: Deactivated successfully. Dec 12 18:14:39.742530 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:14:39.744211 systemd-logind[1824]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:14:39.744914 systemd-logind[1824]: Removed session 7. Dec 12 18:14:39.907763 systemd[1]: Started sshd@7-10.0.8.19:22-139.178.89.65:37168.service - OpenSSH per-connection server daemon (139.178.89.65:37168). 
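The sudo record above shows the core user switching SELinux to enforcing with /usr/sbin/setenforce 1. A quick sketch for verifying the resulting state (getenforce is part of the core SELinux utilities; sestatus gives a fuller view but is not guaranteed to be installed on every image):

    # current runtime mode: Enforcing or Permissive
    getenforce
    # mode, loaded policy and whether SELinux is enabled at all, if sestatus is present
    sestatus

Note that setenforce only changes the runtime mode; the persistent default is read from /etc/selinux/config at boot on most SELinux-enabled systems.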
Dec 12 18:14:40.733576 sshd[2071]: Accepted publickey for core from 139.178.89.65 port 37168 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:14:40.734832 sshd-session[2071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:14:40.740023 systemd-logind[1824]: New session 8 of user core. Dec 12 18:14:40.753516 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:14:41.057177 sudo[2076]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:14:41.057432 sudo[2076]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:14:41.062896 sudo[2076]: pam_unix(sudo:session): session closed for user root Dec 12 18:14:41.069969 sudo[2075]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:14:41.070255 sudo[2075]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:14:41.081595 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:14:41.132000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:14:41.133684 kernel: kauditd_printk_skb: 186 callbacks suppressed Dec 12 18:14:41.133742 kernel: audit: type=1305 audit(1765563281.132:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:14:41.133847 augenrules[2098]: No rules Dec 12 18:14:41.132000 audit[2098]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe5dcb7580 a2=420 a3=0 items=0 ppid=2079 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:41.136211 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:14:41.136508 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:14:41.137413 sudo[2075]: pam_unix(sudo:session): session closed for user root Dec 12 18:14:41.137850 kernel: audit: type=1300 audit(1765563281.132:230): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe5dcb7580 a2=420 a3=0 items=0 ppid=2079 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:41.132000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:14:41.139574 kernel: audit: type=1327 audit(1765563281.132:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:14:41.139629 kernel: audit: type=1130 audit(1765563281.136:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:41.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:41.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:41.144339 kernel: audit: type=1131 audit(1765563281.136:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:41.144413 kernel: audit: type=1106 audit(1765563281.137:233): pid=2075 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:14:41.137000 audit[2075]: USER_END pid=2075 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:14:41.147283 kernel: audit: type=1104 audit(1765563281.137:234): pid=2075 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:14:41.137000 audit[2075]: CRED_DISP pid=2075 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:14:41.294307 sshd[2074]: Connection closed by 139.178.89.65 port 37168 Dec 12 18:14:41.294632 sshd-session[2071]: pam_unix(sshd:session): session closed for user core Dec 12 18:14:41.295000 audit[2071]: USER_END pid=2071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:14:41.297894 systemd[1]: sshd@7-10.0.8.19:22-139.178.89.65:37168.service: Deactivated successfully. Dec 12 18:14:41.296000 audit[2071]: CRED_DISP pid=2071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:14:41.299583 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:14:41.301232 systemd-logind[1824]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:14:41.302035 kernel: audit: type=1106 audit(1765563281.295:235): pid=2071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:14:41.302083 kernel: audit: type=1104 audit(1765563281.296:236): pid=2071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:14:41.302105 kernel: audit: type=1131 audit(1765563281.298:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.8.19:22-139.178.89.65:37168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:41.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.8.19:22-139.178.89.65:37168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:41.302114 systemd-logind[1824]: Removed session 8. Dec 12 18:14:41.473719 systemd[1]: Started sshd@8-10.0.8.19:22-139.178.89.65:37172.service - OpenSSH per-connection server daemon (139.178.89.65:37172). Dec 12 18:14:41.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.8.19:22-139.178.89.65:37172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:42.323839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:14:42.325897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:14:42.330000 audit[2107]: USER_ACCT pid=2107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:14:42.331384 sshd[2107]: Accepted publickey for core from 139.178.89.65 port 37172 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:14:42.332000 audit[2107]: CRED_ACQ pid=2107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:14:42.332000 audit[2107]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd02cbe790 a2=3 a3=0 items=0 ppid=1 pid=2107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:42.332000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:14:42.332771 sshd-session[2107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:14:42.337369 systemd-logind[1824]: New session 9 of user core. Dec 12 18:14:42.338941 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:14:42.341000 audit[2107]: USER_START pid=2107 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:14:42.343000 audit[2113]: CRED_ACQ pid=2113 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:14:42.461152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:14:42.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:14:42.466556 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:14:42.514869 kubelet[2119]: E1212 18:14:42.514795 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:14:42.517313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:14:42.517471 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:14:42.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:14:42.517906 systemd[1]: kubelet.service: Consumed 172ms CPU time, 113.1M memory peak. Dec 12 18:14:42.651753 sudo[2131]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:14:42.651000 audit[2131]: USER_ACCT pid=2131 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:14:42.651000 audit[2131]: CRED_REFR pid=2131 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:14:42.652016 sudo[2131]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:14:42.653000 audit[2131]: USER_START pid=2131 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:14:42.970074 chronyd[1799]: Selected source PHC0 Dec 12 18:14:42.970101 chronyd[1799]: System clock wrong by 1.796476 seconds Dec 12 18:14:44.766769 systemd-resolved[1495]: Clock change detected. Flushing caches. Dec 12 18:14:44.766599 chronyd[1799]: System clock was stepped by 1.796476 seconds Dec 12 18:14:44.869620 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:14:44.883070 (dockerd)[2157]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:14:45.202930 dockerd[2157]: time="2025-12-12T18:14:45.202873350Z" level=info msg="Starting up" Dec 12 18:14:45.203854 dockerd[2157]: time="2025-12-12T18:14:45.203764025Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:14:45.216475 dockerd[2157]: time="2025-12-12T18:14:45.216393250Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:14:45.267091 dockerd[2157]: time="2025-12-12T18:14:45.267017670Z" level=info msg="Loading containers: start." 
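The chronyd records above show the system clock being stepped by roughly 1.8 seconds shortly after boot, which is why systemd-resolved flushes its caches and why the journal timestamps jump from 18:14:42 to 18:14:44. A sketch for inspecting the sync state with chrony's own client (standard chronyc subcommands, nothing host-specific assumed):

    # current offset, stratum and the selected reference ("PHC0" in this log,
    # i.e. a PTP hardware clock exposed to the guest)
    chronyc tracking
    # all configured sources with reachability and last-sample statistics
    chronyc sources -v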
Dec 12 18:14:45.279684 kernel: Initializing XFRM netlink socket Dec 12 18:14:45.354000 audit[2211]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.354000 audit[2211]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffde9d61ae0 a2=0 a3=0 items=0 ppid=2157 pid=2211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.354000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:14:45.356000 audit[2213]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.356000 audit[2213]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff94abec80 a2=0 a3=0 items=0 ppid=2157 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.356000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:14:45.358000 audit[2215]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.358000 audit[2215]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4010d0b0 a2=0 a3=0 items=0 ppid=2157 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.358000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:14:45.360000 audit[2217]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.360000 audit[2217]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccb9bad40 a2=0 a3=0 items=0 ppid=2157 pid=2217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.360000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:14:45.362000 audit[2219]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.362000 audit[2219]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe1bc742f0 a2=0 a3=0 items=0 ppid=2157 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.362000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:14:45.364000 audit[2221]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.364000 audit[2221]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffcd5e3c180 a2=0 a3=0 items=0 ppid=2157 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.364000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:14:45.366000 audit[2223]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.366000 audit[2223]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff97bea330 a2=0 a3=0 items=0 ppid=2157 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:14:45.368000 audit[2225]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.368000 audit[2225]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffed005c2d0 a2=0 a3=0 items=0 ppid=2157 pid=2225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.368000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:14:45.408000 audit[2228]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.408000 audit[2228]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe417d03c0 a2=0 a3=0 items=0 ppid=2157 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 18:14:45.410000 audit[2230]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.410000 audit[2230]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffb2df7aa0 a2=0 a3=0 items=0 ppid=2157 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.410000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:14:45.412000 audit[2232]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.412000 audit[2232]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc1d027570 a2=0 
a3=0 items=0 ppid=2157 pid=2232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:14:45.415000 audit[2234]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.415000 audit[2234]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffb50c1d50 a2=0 a3=0 items=0 ppid=2157 pid=2234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.415000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:14:45.417000 audit[2236]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2236 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.417000 audit[2236]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcbde31800 a2=0 a3=0 items=0 ppid=2157 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.417000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:14:45.463000 audit[2266]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.463000 audit[2266]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff65eb1fb0 a2=0 a3=0 items=0 ppid=2157 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.463000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:14:45.465000 audit[2268]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.465000 audit[2268]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd246ebfa0 a2=0 a3=0 items=0 ppid=2157 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.465000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:14:45.467000 audit[2270]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.467000 audit[2270]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4e06a440 a2=0 a3=0 items=0 ppid=2157 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:14:45.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:14:45.469000 audit[2272]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.469000 audit[2272]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc3e3ff70 a2=0 a3=0 items=0 ppid=2157 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.469000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:14:45.472000 audit[2274]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.472000 audit[2274]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdefe020b0 a2=0 a3=0 items=0 ppid=2157 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.472000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:14:45.473000 audit[2276]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.473000 audit[2276]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc22e725a0 a2=0 a3=0 items=0 ppid=2157 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:14:45.475000 audit[2278]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.475000 audit[2278]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff0873e9c0 a2=0 a3=0 items=0 ppid=2157 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.475000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:14:45.477000 audit[2280]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.477000 audit[2280]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff89b0d890 a2=0 a3=0 items=0 ppid=2157 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.477000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:14:45.480000 audit[2282]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.480000 audit[2282]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdf3bb3b10 a2=0 a3=0 items=0 ppid=2157 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 18:14:45.482000 audit[2284]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.482000 audit[2284]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe8c9046e0 a2=0 a3=0 items=0 ppid=2157 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.482000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:14:45.484000 audit[2286]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.484000 audit[2286]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd75067f80 a2=0 a3=0 items=0 ppid=2157 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.484000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:14:45.486000 audit[2288]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.486000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffb1581640 a2=0 a3=0 items=0 ppid=2157 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.486000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:14:45.488000 audit[2290]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.488000 audit[2290]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc22685230 a2=0 a3=0 items=0 ppid=2157 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.488000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:14:45.495000 audit[2295]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.495000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc8c9d4900 a2=0 a3=0 items=0 ppid=2157 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.495000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:14:45.497000 audit[2297]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.497000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd2509b8b0 a2=0 a3=0 items=0 ppid=2157 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.497000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:14:45.499000 audit[2299]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.499000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdcc8b9730 a2=0 a3=0 items=0 ppid=2157 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.499000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:14:45.502000 audit[2301]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.502000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc02349d40 a2=0 a3=0 items=0 ppid=2157 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.502000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:14:45.504000 audit[2303]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.504000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffcf0eae30 a2=0 a3=0 items=0 ppid=2157 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.504000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:14:45.506000 audit[2305]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2305 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:14:45.506000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcd5208770 a2=0 a3=0 items=0 ppid=2157 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.506000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:14:45.527000 audit[2310]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.527000 audit[2310]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffce5651ef0 a2=0 a3=0 items=0 ppid=2157 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.527000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 18:14:45.529000 audit[2312]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.529000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffadadc0d0 a2=0 a3=0 items=0 ppid=2157 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.529000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 18:14:45.538000 audit[2320]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.538000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffcffc335c0 a2=0 a3=0 items=0 ppid=2157 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.538000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 18:14:45.550000 audit[2326]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.550000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffebd4cd460 a2=0 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.550000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 18:14:45.552000 audit[2328]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 
18:14:45.552000 audit[2328]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd6f825810 a2=0 a3=0 items=0 ppid=2157 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.552000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 18:14:45.555000 audit[2330]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.555000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc721f46a0 a2=0 a3=0 items=0 ppid=2157 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.555000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 18:14:45.557000 audit[2332]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.557000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc70497b00 a2=0 a3=0 items=0 ppid=2157 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.557000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:14:45.559000 audit[2334]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:14:45.559000 audit[2334]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd0a74ce70 a2=0 a3=0 items=0 ppid=2157 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:45.559000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 18:14:45.560404 systemd-networkd[1745]: docker0: Link UP Dec 12 18:14:45.578370 dockerd[2157]: time="2025-12-12T18:14:45.578301371Z" level=info msg="Loading containers: done." 
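The audit triples above (NETFILTER_CFG, SYSCALL, PROCTITLE) record dockerd shelling out to iptables/ip6tables via xtables-nft-multi to create its DOCKER, DOCKER-FORWARD, DOCKER-ISOLATION-STAGE-* and DOCKER-USER chains; the proctitle field is the full command line, hex-encoded with NUL-separated arguments. A minimal Python sketch for decoding it (the helper name is illustrative):

```python
# Decode an audit PROCTITLE value: the hex encoding of the process argv,
# with the individual arguments separated by NUL bytes.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", errors="replace")
                    for arg in raw.split(b"\x00") if arg)

# Example: the audit[2334] record above decodes to the rule Docker inserts
# into its isolation chain.
print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572"
    "002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F"
    "636B657230002D6A0044524F50"
))
# -> /usr/bin/iptables --wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
```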
Dec 12 18:14:45.602577 dockerd[2157]: time="2025-12-12T18:14:45.602490183Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:14:45.602781 dockerd[2157]: time="2025-12-12T18:14:45.602603515Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:14:45.602781 dockerd[2157]: time="2025-12-12T18:14:45.602708579Z" level=info msg="Initializing buildkit" Dec 12 18:14:45.635433 dockerd[2157]: time="2025-12-12T18:14:45.635348399Z" level=info msg="Completed buildkit initialization" Dec 12 18:14:45.641086 dockerd[2157]: time="2025-12-12T18:14:45.641037577Z" level=info msg="Daemon has completed initialization" Dec 12 18:14:45.641243 dockerd[2157]: time="2025-12-12T18:14:45.641180161Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:14:45.641412 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:14:45.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:47.025564 containerd[1848]: time="2025-12-12T18:14:47.025495609Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 18:14:47.815276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580690938.mount: Deactivated successfully. Dec 12 18:14:48.616954 containerd[1848]: time="2025-12-12T18:14:48.616862467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:48.618196 containerd[1848]: time="2025-12-12T18:14:48.618145124Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Dec 12 18:14:48.620509 containerd[1848]: time="2025-12-12T18:14:48.620454742Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:48.623831 containerd[1848]: time="2025-12-12T18:14:48.623793829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:48.624601 containerd[1848]: time="2025-12-12T18:14:48.624572661Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.599026841s" Dec 12 18:14:48.624643 containerd[1848]: time="2025-12-12T18:14:48.624603431Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 12 18:14:48.625176 containerd[1848]: time="2025-12-12T18:14:48.625146912Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 18:14:49.947744 containerd[1848]: time="2025-12-12T18:14:49.947682568Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:49.949145 containerd[1848]: time="2025-12-12T18:14:49.949069952Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 12 18:14:49.950722 containerd[1848]: time="2025-12-12T18:14:49.950653215Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:49.954687 containerd[1848]: time="2025-12-12T18:14:49.954626885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:49.955355 containerd[1848]: time="2025-12-12T18:14:49.955272359Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.33009343s" Dec 12 18:14:49.955355 containerd[1848]: time="2025-12-12T18:14:49.955331384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 12 18:14:49.963361 containerd[1848]: time="2025-12-12T18:14:49.963312391Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 18:14:51.047701 containerd[1848]: time="2025-12-12T18:14:51.047630744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:51.049620 containerd[1848]: time="2025-12-12T18:14:51.049576813Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 12 18:14:51.052069 containerd[1848]: time="2025-12-12T18:14:51.052031842Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:51.056581 containerd[1848]: time="2025-12-12T18:14:51.056251780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:51.057173 containerd[1848]: time="2025-12-12T18:14:51.057147239Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.093794623s" Dec 12 18:14:51.057223 containerd[1848]: time="2025-12-12T18:14:51.057179490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 12 18:14:51.064725 containerd[1848]: time="2025-12-12T18:14:51.064681981Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 
12 18:14:52.046102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1717492935.mount: Deactivated successfully. Dec 12 18:14:52.362401 containerd[1848]: time="2025-12-12T18:14:52.362274563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:52.363595 containerd[1848]: time="2025-12-12T18:14:52.363560311Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20340589" Dec 12 18:14:52.365259 containerd[1848]: time="2025-12-12T18:14:52.365234470Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:52.367415 containerd[1848]: time="2025-12-12T18:14:52.367396486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:52.367968 containerd[1848]: time="2025-12-12T18:14:52.367802171Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.303085992s" Dec 12 18:14:52.367968 containerd[1848]: time="2025-12-12T18:14:52.367825706Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 12 18:14:52.368201 containerd[1848]: time="2025-12-12T18:14:52.368162052Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 18:14:52.983783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3830981414.mount: Deactivated successfully. 
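Each completed pull above is logged with the unpacked image size and the wall-clock duration (for example kube-proxy: size "31929115" in 1.303085992s). A small, illustrative parser for estimating effective pull rates from these lines; the regex is an assumption about how the journal renders containerd's quoted fields, not a containerd API:

```python
import re

# Rough pattern for the "Pulled image ..." lines above; the \" escaping is an
# assumption about the journal's rendering of containerd's quoted fields.
PULLED = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".*?'
    r'size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def pull_rates(lines):
    """Yield (image, MiB, seconds, MiB/s) for every completed image pull."""
    for line in lines:
        m = PULLED.search(line)
        if not m:
            continue
        mib = int(m["size"]) / (1024 * 1024)
        secs = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
        yield m["image"], mib, secs, mib / secs

# kube-proxy above: 31929115 bytes in 1.303085992s, roughly 23 MiB/s.
```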
Dec 12 18:14:53.611715 containerd[1848]: time="2025-12-12T18:14:53.611556143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:53.615859 containerd[1848]: time="2025-12-12T18:14:53.615820819Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=0" Dec 12 18:14:53.629630 containerd[1848]: time="2025-12-12T18:14:53.629559410Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:53.633876 containerd[1848]: time="2025-12-12T18:14:53.633804156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:53.634759 containerd[1848]: time="2025-12-12T18:14:53.634687330Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.266492055s" Dec 12 18:14:53.634759 containerd[1848]: time="2025-12-12T18:14:53.634737303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 12 18:14:53.636107 containerd[1848]: time="2025-12-12T18:14:53.636067377Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:14:54.244809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2408373056.mount: Deactivated successfully. 
Dec 12 18:14:54.254108 containerd[1848]: time="2025-12-12T18:14:54.254041378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:14:54.257053 containerd[1848]: time="2025-12-12T18:14:54.257008690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:14:54.259371 containerd[1848]: time="2025-12-12T18:14:54.259295188Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:14:54.261616 containerd[1848]: time="2025-12-12T18:14:54.261564388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:14:54.262317 containerd[1848]: time="2025-12-12T18:14:54.262114913Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 626.019615ms" Dec 12 18:14:54.262317 containerd[1848]: time="2025-12-12T18:14:54.262144456Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:14:54.262530 containerd[1848]: time="2025-12-12T18:14:54.262516784Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 18:14:54.369950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 12 18:14:54.371436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:14:54.523922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:14:54.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:54.524854 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 12 18:14:54.524902 kernel: audit: type=1130 audit(1765563294.523:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:14:54.529645 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:14:54.577772 kubelet[2519]: E1212 18:14:54.577685 2519 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:14:54.579909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:14:54.580042 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
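The kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is only written once kubeadm init or join runs, so systemd keeps restarting the unit (restart counter already at 3) until then. A minimal diagnostic sketch, assuming PyYAML is available and that the file, once present, is a KubeletConfiguration document; everything beyond the path seen in the error is illustrative:

```python
import os
import yaml  # PyYAML; an assumed dependency of this diagnostic sketch

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the error above

def check_kubelet_config(path: str = KUBELET_CONFIG) -> str:
    """Distinguish 'not bootstrapped yet' from 'present but unparsable'."""
    if not os.path.exists(path):
        # Matches the failure in the log: kubeadm init/join has not run yet,
        # so systemd will keep restarting kubelet.service.
        return "missing (node not bootstrapped yet)"
    with open(path) as f:
        doc = yaml.safe_load(f) or {}
    if doc.get("kind") != "KubeletConfiguration":
        return f"present but unexpected kind: {doc.get('kind')!r}"
    return f"ok (apiVersion={doc.get('apiVersion')})"

if __name__ == "__main__":
    print(check_kubelet_config())
```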
Dec 12 18:14:54.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:14:54.580473 systemd[1]: kubelet.service: Consumed 169ms CPU time, 111.2M memory peak. Dec 12 18:14:54.588727 kernel: audit: type=1131 audit(1765563294.579:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:14:54.796176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399303559.mount: Deactivated successfully. Dec 12 18:14:56.130753 containerd[1848]: time="2025-12-12T18:14:56.129683564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:56.135509 containerd[1848]: time="2025-12-12T18:14:56.130912468Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Dec 12 18:14:56.135509 containerd[1848]: time="2025-12-12T18:14:56.132391618Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:56.135509 containerd[1848]: time="2025-12-12T18:14:56.135203395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:14:56.136078 containerd[1848]: time="2025-12-12T18:14:56.136043047Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.873460207s" Dec 12 18:14:56.136078 containerd[1848]: time="2025-12-12T18:14:56.136069948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 12 18:15:00.274467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:15:00.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:00.274639 systemd[1]: kubelet.service: Consumed 169ms CPU time, 111.2M memory peak. Dec 12 18:15:00.276993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:15:00.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:00.279036 kernel: audit: type=1130 audit(1765563300.272:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:00.279089 kernel: audit: type=1131 audit(1765563300.272:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 18:15:00.302560 systemd[1]: Reload requested from client PID 2622 ('systemctl') (unit session-9.scope)... Dec 12 18:15:00.302575 systemd[1]: Reloading... Dec 12 18:15:00.362709 zram_generator::config[2669]: No configuration found. Dec 12 18:15:00.550581 systemd[1]: Reloading finished in 247 ms. Dec 12 18:15:00.574000 audit: BPF prog-id=63 op=LOAD Dec 12 18:15:00.577702 kernel: audit: type=1334 audit(1765563300.574:294): prog-id=63 op=LOAD Dec 12 18:15:00.574000 audit: BPF prog-id=60 op=UNLOAD Dec 12 18:15:00.574000 audit: BPF prog-id=64 op=LOAD Dec 12 18:15:00.580329 kernel: audit: type=1334 audit(1765563300.574:295): prog-id=60 op=UNLOAD Dec 12 18:15:00.580363 kernel: audit: type=1334 audit(1765563300.574:296): prog-id=64 op=LOAD Dec 12 18:15:00.580382 kernel: audit: type=1334 audit(1765563300.574:297): prog-id=65 op=LOAD Dec 12 18:15:00.574000 audit: BPF prog-id=65 op=LOAD Dec 12 18:15:00.574000 audit: BPF prog-id=61 op=UNLOAD Dec 12 18:15:00.582013 kernel: audit: type=1334 audit(1765563300.574:298): prog-id=61 op=UNLOAD Dec 12 18:15:00.574000 audit: BPF prog-id=62 op=UNLOAD Dec 12 18:15:00.583850 kernel: audit: type=1334 audit(1765563300.574:299): prog-id=62 op=UNLOAD Dec 12 18:15:00.583887 kernel: audit: type=1334 audit(1765563300.576:300): prog-id=66 op=LOAD Dec 12 18:15:00.576000 audit: BPF prog-id=66 op=LOAD Dec 12 18:15:00.576000 audit: BPF prog-id=59 op=UNLOAD Dec 12 18:15:00.586089 kernel: audit: type=1334 audit(1765563300.576:301): prog-id=59 op=UNLOAD Dec 12 18:15:00.576000 audit: BPF prog-id=67 op=LOAD Dec 12 18:15:00.576000 audit: BPF prog-id=68 op=LOAD Dec 12 18:15:00.576000 audit: BPF prog-id=56 op=UNLOAD Dec 12 18:15:00.576000 audit: BPF prog-id=57 op=UNLOAD Dec 12 18:15:00.576000 audit: BPF prog-id=69 op=LOAD Dec 12 18:15:00.576000 audit: BPF prog-id=50 op=UNLOAD Dec 12 18:15:00.576000 audit: BPF prog-id=70 op=LOAD Dec 12 18:15:00.576000 audit: BPF prog-id=71 op=LOAD Dec 12 18:15:00.576000 audit: BPF prog-id=51 op=UNLOAD Dec 12 18:15:00.576000 audit: BPF prog-id=52 op=UNLOAD Dec 12 18:15:00.577000 audit: BPF prog-id=72 op=LOAD Dec 12 18:15:00.577000 audit: BPF prog-id=47 op=UNLOAD Dec 12 18:15:00.577000 audit: BPF prog-id=73 op=LOAD Dec 12 18:15:00.577000 audit: BPF prog-id=74 op=LOAD Dec 12 18:15:00.577000 audit: BPF prog-id=48 op=UNLOAD Dec 12 18:15:00.577000 audit: BPF prog-id=49 op=UNLOAD Dec 12 18:15:00.577000 audit: BPF prog-id=75 op=LOAD Dec 12 18:15:00.577000 audit: BPF prog-id=43 op=UNLOAD Dec 12 18:15:00.577000 audit: BPF prog-id=76 op=LOAD Dec 12 18:15:00.577000 audit: BPF prog-id=77 op=LOAD Dec 12 18:15:00.577000 audit: BPF prog-id=44 op=UNLOAD Dec 12 18:15:00.577000 audit: BPF prog-id=45 op=UNLOAD Dec 12 18:15:00.600000 audit: BPF prog-id=78 op=LOAD Dec 12 18:15:00.600000 audit: BPF prog-id=53 op=UNLOAD Dec 12 18:15:00.600000 audit: BPF prog-id=79 op=LOAD Dec 12 18:15:00.600000 audit: BPF prog-id=80 op=LOAD Dec 12 18:15:00.600000 audit: BPF prog-id=54 op=UNLOAD Dec 12 18:15:00.600000 audit: BPF prog-id=55 op=UNLOAD Dec 12 18:15:00.600000 audit: BPF prog-id=81 op=LOAD Dec 12 18:15:00.600000 audit: BPF prog-id=58 op=UNLOAD Dec 12 18:15:00.601000 audit: BPF prog-id=82 op=LOAD Dec 12 18:15:00.601000 audit: BPF prog-id=46 op=UNLOAD Dec 12 18:15:00.630326 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:15:00.630416 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:15:00.630752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
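The burst of "BPF prog-id=... op=LOAD/UNLOAD" audit records during "Reloading..." most likely reflects systemd swapping per-unit cgroup BPF programs as part of the daemon reload. Purely as a log-analysis sketch, a quick tally over the audit stream can confirm the loads and unloads roughly pair up:

```python
import re
from collections import Counter

BPF = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def bpf_balance(lines):
    """Tally BPF program LOAD/UNLOAD audit events, e.g. across a reload window."""
    ops = Counter(m.group(2) for line in lines for m in BPF.finditer(line))
    return ops["LOAD"], ops["UNLOAD"]
```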
Dec 12 18:15:00.630810 systemd[1]: kubelet.service: Consumed 91ms CPU time, 98.5M memory peak. Dec 12 18:15:00.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:15:00.632344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:15:00.753950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:15:00.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:00.757873 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:15:00.794020 kubelet[2723]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:15:00.794020 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:15:00.794020 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:15:00.794386 kubelet[2723]: I1212 18:15:00.794069 2723 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:15:01.335516 kubelet[2723]: I1212 18:15:01.335467 2723 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:15:01.335516 kubelet[2723]: I1212 18:15:01.335498 2723 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:15:01.335730 kubelet[2723]: I1212 18:15:01.335710 2723 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:15:01.383525 kubelet[2723]: I1212 18:15:01.383437 2723 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:15:01.386437 kubelet[2723]: E1212 18:15:01.385527 2723 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.8.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.8.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:15:01.395695 kubelet[2723]: I1212 18:15:01.395150 2723 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:15:01.402504 kubelet[2723]: I1212 18:15:01.402463 2723 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:15:01.402716 kubelet[2723]: I1212 18:15:01.402686 2723 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:15:01.402876 kubelet[2723]: I1212 18:15:01.402710 2723 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-e-14f87f00b0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:15:01.402975 kubelet[2723]: I1212 18:15:01.402880 2723 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:15:01.402975 kubelet[2723]: I1212 18:15:01.402889 2723 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:15:01.404390 kubelet[2723]: I1212 18:15:01.404349 2723 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:15:01.411192 kubelet[2723]: I1212 18:15:01.411161 2723 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:15:01.411192 kubelet[2723]: I1212 18:15:01.411193 2723 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:15:01.414925 kubelet[2723]: I1212 18:15:01.414854 2723 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:15:01.414925 kubelet[2723]: I1212 18:15:01.414880 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:15:01.417729 kubelet[2723]: E1212 18:15:01.417681 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.8.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-e-14f87f00b0&limit=500&resourceVersion=0\": dial tcp 10.0.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:15:01.420137 kubelet[2723]: E1212 18:15:01.420086 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.8.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Dec 12 18:15:01.421128 kubelet[2723]: I1212 18:15:01.421106 2723 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:15:01.421621 kubelet[2723]: I1212 18:15:01.421601 2723 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:15:01.422988 kubelet[2723]: W1212 18:15:01.422952 2723 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:15:01.424989 kubelet[2723]: I1212 18:15:01.424957 2723 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:15:01.425052 kubelet[2723]: I1212 18:15:01.425013 2723 server.go:1289] "Started kubelet" Dec 12 18:15:01.425354 kubelet[2723]: I1212 18:15:01.425216 2723 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:15:01.429027 kubelet[2723]: I1212 18:15:01.428991 2723 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:15:01.429338 kubelet[2723]: I1212 18:15:01.429301 2723 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:15:01.431519 kubelet[2723]: I1212 18:15:01.431488 2723 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:15:01.431650 kubelet[2723]: I1212 18:15:01.431495 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:15:01.431692 kubelet[2723]: I1212 18:15:01.431646 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:15:01.432343 kubelet[2723]: I1212 18:15:01.432321 2723 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:15:01.432449 kubelet[2723]: I1212 18:15:01.432420 2723 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:15:01.432480 kubelet[2723]: E1212 18:15:01.432452 2723 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515-1-0-e-14f87f00b0\" not found" Dec 12 18:15:01.432506 kubelet[2723]: I1212 18:15:01.432483 2723 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:15:01.433445 kubelet[2723]: E1212 18:15:01.433028 2723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-e-14f87f00b0?timeout=10s\": dial tcp 10.0.8.19:6443: connect: connection refused" interval="200ms" Dec 12 18:15:01.433445 kubelet[2723]: E1212 18:15:01.433131 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.8.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:15:01.433445 kubelet[2723]: E1212 18:15:01.433413 2723 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:15:01.434204 kubelet[2723]: I1212 18:15:01.434181 2723 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:15:01.434204 kubelet[2723]: I1212 18:15:01.434202 2723 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:15:01.434279 kubelet[2723]: I1212 18:15:01.434268 2723 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:15:01.436000 audit[2747]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:01.436000 audit[2747]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe00aa4f60 a2=0 a3=0 items=0 ppid=2723 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:15:01.439465 kubelet[2723]: E1212 18:15:01.437600 2723 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.8.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.8.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515-1-0-e-14f87f00b0.18808a77e1e6cf34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515-1-0-e-14f87f00b0,UID:ci-4515-1-0-e-14f87f00b0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-e-14f87f00b0,},FirstTimestamp:2025-12-12 18:15:01.424975668 +0000 UTC m=+0.663686947,LastTimestamp:2025-12-12 18:15:01.424975668 +0000 UTC m=+0.663686947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-e-14f87f00b0,}" Dec 12 18:15:01.437000 audit[2748]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:01.437000 audit[2748]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeae52f080 a2=0 a3=0 items=0 ppid=2723 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:15:01.439000 audit[2750]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:01.439000 audit[2750]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe237400e0 a2=0 a3=0 items=0 ppid=2723 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.439000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:15:01.441000 audit[2752]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:01.441000 audit[2752]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd9d117fa0 a2=0 a3=0 items=0 ppid=2723 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:15:01.446707 kubelet[2723]: I1212 18:15:01.446686 2723 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:15:01.446707 kubelet[2723]: I1212 18:15:01.446701 2723 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:15:01.446810 kubelet[2723]: I1212 18:15:01.446716 2723 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:15:01.451353 kubelet[2723]: I1212 18:15:01.451147 2723 policy_none.go:49] "None policy: Start" Dec 12 18:15:01.451353 kubelet[2723]: I1212 18:15:01.451178 2723 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:15:01.451353 kubelet[2723]: I1212 18:15:01.451189 2723 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:15:01.451000 audit[2755]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2755 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:01.451000 audit[2755]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc89ec2610 a2=0 a3=0 items=0 ppid=2723 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 18:15:01.453602 kubelet[2723]: I1212 18:15:01.453323 2723 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 12 18:15:01.452000 audit[2756]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2756 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:01.452000 audit[2756]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff08ced850 a2=0 a3=0 items=0 ppid=2723 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:15:01.452000 audit[2757]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:01.452000 audit[2757]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffca611720 a2=0 a3=0 items=0 ppid=2723 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:15:01.454619 kubelet[2723]: I1212 18:15:01.454538 2723 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:15:01.454619 kubelet[2723]: I1212 18:15:01.454583 2723 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:15:01.454619 kubelet[2723]: I1212 18:15:01.454616 2723 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:15:01.454703 kubelet[2723]: I1212 18:15:01.454625 2723 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:15:01.454772 kubelet[2723]: E1212 18:15:01.454721 2723 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:15:01.455125 kubelet[2723]: E1212 18:15:01.455096 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.8.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:15:01.453000 audit[2759]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2759 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:01.453000 audit[2759]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfcfce310 a2=0 a3=0 items=0 ppid=2723 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:15:01.454000 audit[2760]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:01.454000 audit[2760]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8d0524d0 a2=0 a3=0 items=0 ppid=2723 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:15:01.455000 audit[2761]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2761 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:01.455000 audit[2761]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfc943fe0 a2=0 a3=0 items=0 ppid=2723 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:15:01.455000 audit[2762]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:01.455000 audit[2762]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd16c7760 a2=0 a3=0 items=0 ppid=2723 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:15:01.457000 audit[2763]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2763 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:01.457000 audit[2763]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0883be20 a2=0 a3=0 items=0 ppid=2723 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.457000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:15:01.459521 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:15:01.472176 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:15:01.493384 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:15:01.494639 kubelet[2723]: E1212 18:15:01.494513 2723 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:15:01.494759 kubelet[2723]: I1212 18:15:01.494738 2723 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:15:01.494892 kubelet[2723]: I1212 18:15:01.494763 2723 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:15:01.494969 kubelet[2723]: I1212 18:15:01.494956 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:15:01.496231 kubelet[2723]: E1212 18:15:01.496202 2723 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:15:01.496231 kubelet[2723]: E1212 18:15:01.496233 2723 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515-1-0-e-14f87f00b0\" not found" Dec 12 18:15:01.568962 systemd[1]: Created slice kubepods-burstable-podb94aa12fd8673d6357477d6df957aaf2.slice - libcontainer container kubepods-burstable-podb94aa12fd8673d6357477d6df957aaf2.slice. Dec 12 18:15:01.594055 kubelet[2723]: E1212 18:15:01.593932 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-14f87f00b0\" not found" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.597579 kubelet[2723]: I1212 18:15:01.597323 2723 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.597397 systemd[1]: Created slice kubepods-burstable-pod1e6dca2771c2ad4f206c0da74a69f151.slice - libcontainer container kubepods-burstable-pod1e6dca2771c2ad4f206c0da74a69f151.slice. Dec 12 18:15:01.597984 kubelet[2723]: E1212 18:15:01.597726 2723 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.8.19:6443/api/v1/nodes\": dial tcp 10.0.8.19:6443: connect: connection refused" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.599703 kubelet[2723]: E1212 18:15:01.599673 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-14f87f00b0\" not found" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.601107 systemd[1]: Created slice kubepods-burstable-podcb9274a72e8e0806c6af77c8397b1934.slice - libcontainer container kubepods-burstable-podcb9274a72e8e0806c6af77c8397b1934.slice. 
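The kubepods-burstable-pod&lt;uid&gt;.slice units created above follow the kubelet's systemd cgroup naming for pod-level cgroups. A small sketch of that mapping; the guaranteed-QoS case and the dash-to-underscore rule are assumptions, since the static-pod UIDs in this log are burstable and contain no dashes:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Systemd slice the kubelet's cgroup driver creates for a pod, matching
    the kubepods-burstable-pod<uid>.slice units above.
    Assumptions: guaranteed pods sit directly under kubepods.slice, and dashes
    in a UID become underscores (the static-pod UIDs here contain none)."""
    parent = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    return f"{parent}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("burstable", "b94aa12fd8673d6357477d6df957aaf2"))
# -> kubepods-burstable-podb94aa12fd8673d6357477d6df957aaf2.slice
```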
Dec 12 18:15:01.602499 kubelet[2723]: E1212 18:15:01.602470 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-14f87f00b0\" not found" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.633904 kubelet[2723]: I1212 18:15:01.633819 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b94aa12fd8673d6357477d6df957aaf2-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-e-14f87f00b0\" (UID: \"b94aa12fd8673d6357477d6df957aaf2\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.633904 kubelet[2723]: I1212 18:15:01.633887 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b94aa12fd8673d6357477d6df957aaf2-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-e-14f87f00b0\" (UID: \"b94aa12fd8673d6357477d6df957aaf2\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.633904 kubelet[2723]: I1212 18:15:01.633924 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb9274a72e8e0806c6af77c8397b1934-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-e-14f87f00b0\" (UID: \"cb9274a72e8e0806c6af77c8397b1934\") " pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.634136 kubelet[2723]: I1212 18:15:01.633945 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b94aa12fd8673d6357477d6df957aaf2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-e-14f87f00b0\" (UID: \"b94aa12fd8673d6357477d6df957aaf2\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.634136 kubelet[2723]: I1212 18:15:01.634005 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: \"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.634136 kubelet[2723]: I1212 18:15:01.634025 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: \"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.634136 kubelet[2723]: I1212 18:15:01.634058 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: \"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.634136 kubelet[2723]: I1212 18:15:01.634074 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: 
\"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.634248 kubelet[2723]: I1212 18:15:01.634089 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: \"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.634248 kubelet[2723]: E1212 18:15:01.634146 2723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-e-14f87f00b0?timeout=10s\": dial tcp 10.0.8.19:6443: connect: connection refused" interval="400ms" Dec 12 18:15:01.799911 kubelet[2723]: I1212 18:15:01.799843 2723 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.800272 kubelet[2723]: E1212 18:15:01.800215 2723 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.8.19:6443/api/v1/nodes\": dial tcp 10.0.8.19:6443: connect: connection refused" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:01.895561 containerd[1848]: time="2025-12-12T18:15:01.895486106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-e-14f87f00b0,Uid:b94aa12fd8673d6357477d6df957aaf2,Namespace:kube-system,Attempt:0,}" Dec 12 18:15:01.901944 containerd[1848]: time="2025-12-12T18:15:01.901888154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-e-14f87f00b0,Uid:1e6dca2771c2ad4f206c0da74a69f151,Namespace:kube-system,Attempt:0,}" Dec 12 18:15:01.903385 containerd[1848]: time="2025-12-12T18:15:01.903348049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-e-14f87f00b0,Uid:cb9274a72e8e0806c6af77c8397b1934,Namespace:kube-system,Attempt:0,}" Dec 12 18:15:01.935359 containerd[1848]: time="2025-12-12T18:15:01.935190108Z" level=info msg="connecting to shim 0612cbeab0d8328162a256c78b535d4b539d4d6722721ae7e16c11e879923164" address="unix:///run/containerd/s/3884b6b3cef5fffe9423c1832acd01080b0ae55654923ae79a264410b0abc151" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:01.964373 containerd[1848]: time="2025-12-12T18:15:01.964281038Z" level=info msg="connecting to shim 97585d37b9464f3bd753eeafab9c79cbfacedb7632826297f9de2cfd02452544" address="unix:///run/containerd/s/6c65586a96ec9353c3e6fa6312cce20dde85b8d75bb9fd0789717e559a1874bf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:01.965064 containerd[1848]: time="2025-12-12T18:15:01.965017671Z" level=info msg="connecting to shim 71112cd8a7cac7c3e43910dbacff6282d2abca1916d96d8118e763342c8109ac" address="unix:///run/containerd/s/778cf22491b9c224589e8c64aaa5bcdd3fa288119afb8fceb4e77eb33d6fb548" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:01.966976 systemd[1]: Started cri-containerd-0612cbeab0d8328162a256c78b535d4b539d4d6722721ae7e16c11e879923164.scope - libcontainer container 0612cbeab0d8328162a256c78b535d4b539d4d6722721ae7e16c11e879923164. 
Dec 12 18:15:01.977000 audit: BPF prog-id=83 op=LOAD Dec 12 18:15:01.977000 audit: BPF prog-id=84 op=LOAD Dec 12 18:15:01.977000 audit[2782]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2772 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313263626561623064383332383136326132353663373862353335 Dec 12 18:15:01.977000 audit: BPF prog-id=84 op=UNLOAD Dec 12 18:15:01.977000 audit[2782]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2772 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313263626561623064383332383136326132353663373862353335 Dec 12 18:15:01.977000 audit: BPF prog-id=85 op=LOAD Dec 12 18:15:01.977000 audit[2782]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2772 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313263626561623064383332383136326132353663373862353335 Dec 12 18:15:01.977000 audit: BPF prog-id=86 op=LOAD Dec 12 18:15:01.977000 audit[2782]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2772 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313263626561623064383332383136326132353663373862353335 Dec 12 18:15:01.977000 audit: BPF prog-id=86 op=UNLOAD Dec 12 18:15:01.977000 audit[2782]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2772 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313263626561623064383332383136326132353663373862353335 Dec 12 18:15:01.977000 audit: BPF prog-id=85 op=UNLOAD Dec 12 18:15:01.977000 audit[2782]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2772 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313263626561623064383332383136326132353663373862353335 Dec 12 18:15:01.977000 audit: BPF prog-id=87 op=LOAD Dec 12 18:15:01.977000 audit[2782]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2772 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:01.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313263626561623064383332383136326132353663373862353335 Dec 12 18:15:01.995942 systemd[1]: Started cri-containerd-71112cd8a7cac7c3e43910dbacff6282d2abca1916d96d8118e763342c8109ac.scope - libcontainer container 71112cd8a7cac7c3e43910dbacff6282d2abca1916d96d8118e763342c8109ac. Dec 12 18:15:01.997119 systemd[1]: Started cri-containerd-97585d37b9464f3bd753eeafab9c79cbfacedb7632826297f9de2cfd02452544.scope - libcontainer container 97585d37b9464f3bd753eeafab9c79cbfacedb7632826297f9de2cfd02452544. Dec 12 18:15:02.006000 audit: BPF prog-id=88 op=LOAD Dec 12 18:15:02.007000 audit: BPF prog-id=89 op=LOAD Dec 12 18:15:02.007000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2813 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313132636438613763616337633365343339313064626163666636 Dec 12 18:15:02.007000 audit: BPF prog-id=89 op=UNLOAD Dec 12 18:15:02.007000 audit[2839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2813 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313132636438613763616337633365343339313064626163666636 Dec 12 18:15:02.007000 audit: BPF prog-id=90 op=LOAD Dec 12 18:15:02.007000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2813 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.007000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313132636438613763616337633365343339313064626163666636 Dec 12 18:15:02.007000 audit: BPF prog-id=91 op=LOAD Dec 12 18:15:02.007000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2813 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313132636438613763616337633365343339313064626163666636 Dec 12 18:15:02.007000 audit: BPF prog-id=91 op=UNLOAD Dec 12 18:15:02.007000 audit[2839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2813 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313132636438613763616337633365343339313064626163666636 Dec 12 18:15:02.007000 audit: BPF prog-id=90 op=UNLOAD Dec 12 18:15:02.007000 audit[2839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2813 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313132636438613763616337633365343339313064626163666636 Dec 12 18:15:02.007000 audit: BPF prog-id=92 op=LOAD Dec 12 18:15:02.007000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2813 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313132636438613763616337633365343339313064626163666636 Dec 12 18:15:02.007000 audit: BPF prog-id=93 op=LOAD Dec 12 18:15:02.008000 audit: BPF prog-id=94 op=LOAD Dec 12 18:15:02.008000 audit[2841]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2810 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.008000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353835643337623934363466336264373533656561666162396337 Dec 12 18:15:02.008000 audit: BPF prog-id=94 op=UNLOAD Dec 12 18:15:02.008000 audit[2841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353835643337623934363466336264373533656561666162396337 Dec 12 18:15:02.008000 audit: BPF prog-id=95 op=LOAD Dec 12 18:15:02.008000 audit[2841]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2810 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353835643337623934363466336264373533656561666162396337 Dec 12 18:15:02.008000 audit: BPF prog-id=96 op=LOAD Dec 12 18:15:02.008000 audit[2841]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2810 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353835643337623934363466336264373533656561666162396337 Dec 12 18:15:02.008000 audit: BPF prog-id=96 op=UNLOAD Dec 12 18:15:02.008000 audit[2841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353835643337623934363466336264373533656561666162396337 Dec 12 18:15:02.008000 audit: BPF prog-id=95 op=UNLOAD Dec 12 18:15:02.008000 audit[2841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.008000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353835643337623934363466336264373533656561666162396337 Dec 12 18:15:02.008000 audit: BPF prog-id=97 op=LOAD Dec 12 18:15:02.008000 audit[2841]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2810 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353835643337623934363466336264373533656561666162396337 Dec 12 18:15:02.014693 containerd[1848]: time="2025-12-12T18:15:02.014645108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-e-14f87f00b0,Uid:b94aa12fd8673d6357477d6df957aaf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0612cbeab0d8328162a256c78b535d4b539d4d6722721ae7e16c11e879923164\"" Dec 12 18:15:02.027135 containerd[1848]: time="2025-12-12T18:15:02.027097650Z" level=info msg="CreateContainer within sandbox \"0612cbeab0d8328162a256c78b535d4b539d4d6722721ae7e16c11e879923164\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:15:02.034945 kubelet[2723]: E1212 18:15:02.034896 2723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-e-14f87f00b0?timeout=10s\": dial tcp 10.0.8.19:6443: connect: connection refused" interval="800ms" Dec 12 18:15:02.039958 containerd[1848]: time="2025-12-12T18:15:02.039915413Z" level=info msg="Container 579d77e4b1712cfd01ea93685093f3aefd1bef3dbdf348c5104a74cf5edc9834: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:02.048572 containerd[1848]: time="2025-12-12T18:15:02.048525199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-e-14f87f00b0,Uid:1e6dca2771c2ad4f206c0da74a69f151,Namespace:kube-system,Attempt:0,} returns sandbox id \"71112cd8a7cac7c3e43910dbacff6282d2abca1916d96d8118e763342c8109ac\"" Dec 12 18:15:02.051348 containerd[1848]: time="2025-12-12T18:15:02.051319782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-e-14f87f00b0,Uid:cb9274a72e8e0806c6af77c8397b1934,Namespace:kube-system,Attempt:0,} returns sandbox id \"97585d37b9464f3bd753eeafab9c79cbfacedb7632826297f9de2cfd02452544\"" Dec 12 18:15:02.052497 containerd[1848]: time="2025-12-12T18:15:02.052478295Z" level=info msg="CreateContainer within sandbox \"71112cd8a7cac7c3e43910dbacff6282d2abca1916d96d8118e763342c8109ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:15:02.057839 containerd[1848]: time="2025-12-12T18:15:02.057788122Z" level=info msg="CreateContainer within sandbox \"0612cbeab0d8328162a256c78b535d4b539d4d6722721ae7e16c11e879923164\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"579d77e4b1712cfd01ea93685093f3aefd1bef3dbdf348c5104a74cf5edc9834\"" Dec 12 18:15:02.057918 containerd[1848]: time="2025-12-12T18:15:02.057839142Z" level=info msg="CreateContainer within sandbox 
\"97585d37b9464f3bd753eeafab9c79cbfacedb7632826297f9de2cfd02452544\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:15:02.058239 containerd[1848]: time="2025-12-12T18:15:02.058215323Z" level=info msg="StartContainer for \"579d77e4b1712cfd01ea93685093f3aefd1bef3dbdf348c5104a74cf5edc9834\"" Dec 12 18:15:02.059112 containerd[1848]: time="2025-12-12T18:15:02.059069526Z" level=info msg="connecting to shim 579d77e4b1712cfd01ea93685093f3aefd1bef3dbdf348c5104a74cf5edc9834" address="unix:///run/containerd/s/3884b6b3cef5fffe9423c1832acd01080b0ae55654923ae79a264410b0abc151" protocol=ttrpc version=3 Dec 12 18:15:02.064619 containerd[1848]: time="2025-12-12T18:15:02.064575354Z" level=info msg="Container de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:02.076142 containerd[1848]: time="2025-12-12T18:15:02.076101351Z" level=info msg="CreateContainer within sandbox \"71112cd8a7cac7c3e43910dbacff6282d2abca1916d96d8118e763342c8109ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34\"" Dec 12 18:15:02.076497 containerd[1848]: time="2025-12-12T18:15:02.076472574Z" level=info msg="StartContainer for \"de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34\"" Dec 12 18:15:02.076671 containerd[1848]: time="2025-12-12T18:15:02.076625573Z" level=info msg="Container 60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:02.077373 containerd[1848]: time="2025-12-12T18:15:02.077346383Z" level=info msg="connecting to shim de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34" address="unix:///run/containerd/s/778cf22491b9c224589e8c64aaa5bcdd3fa288119afb8fceb4e77eb33d6fb548" protocol=ttrpc version=3 Dec 12 18:15:02.079942 systemd[1]: Started cri-containerd-579d77e4b1712cfd01ea93685093f3aefd1bef3dbdf348c5104a74cf5edc9834.scope - libcontainer container 579d77e4b1712cfd01ea93685093f3aefd1bef3dbdf348c5104a74cf5edc9834. Dec 12 18:15:02.090451 containerd[1848]: time="2025-12-12T18:15:02.090397316Z" level=info msg="CreateContainer within sandbox \"97585d37b9464f3bd753eeafab9c79cbfacedb7632826297f9de2cfd02452544\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338\"" Dec 12 18:15:02.090907 containerd[1848]: time="2025-12-12T18:15:02.090883534Z" level=info msg="StartContainer for \"60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338\"" Dec 12 18:15:02.091855 containerd[1848]: time="2025-12-12T18:15:02.091826254Z" level=info msg="connecting to shim 60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338" address="unix:///run/containerd/s/6c65586a96ec9353c3e6fa6312cce20dde85b8d75bb9fd0789717e559a1874bf" protocol=ttrpc version=3 Dec 12 18:15:02.100917 systemd[1]: Started cri-containerd-de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34.scope - libcontainer container de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34. 
Dec 12 18:15:02.101000 audit: BPF prog-id=98 op=LOAD Dec 12 18:15:02.102000 audit: BPF prog-id=99 op=LOAD Dec 12 18:15:02.102000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2772 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396437376534623137313263666430316561393336383530393366 Dec 12 18:15:02.102000 audit: BPF prog-id=99 op=UNLOAD Dec 12 18:15:02.102000 audit[2898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2772 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396437376534623137313263666430316561393336383530393366 Dec 12 18:15:02.102000 audit: BPF prog-id=100 op=LOAD Dec 12 18:15:02.102000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2772 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396437376534623137313263666430316561393336383530393366 Dec 12 18:15:02.102000 audit: BPF prog-id=101 op=LOAD Dec 12 18:15:02.102000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2772 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396437376534623137313263666430316561393336383530393366 Dec 12 18:15:02.102000 audit: BPF prog-id=101 op=UNLOAD Dec 12 18:15:02.102000 audit[2898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2772 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396437376534623137313263666430316561393336383530393366 Dec 12 18:15:02.103000 audit: BPF prog-id=100 op=UNLOAD Dec 12 18:15:02.103000 audit[2898]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2772 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396437376534623137313263666430316561393336383530393366 Dec 12 18:15:02.103000 audit: BPF prog-id=102 op=LOAD Dec 12 18:15:02.103000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2772 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396437376534623137313263666430316561393336383530393366 Dec 12 18:15:02.105350 systemd[1]: Started cri-containerd-60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338.scope - libcontainer container 60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338. Dec 12 18:15:02.114000 audit: BPF prog-id=103 op=LOAD Dec 12 18:15:02.115000 audit: BPF prog-id=104 op=LOAD Dec 12 18:15:02.115000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2810 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636639653362346531333063643938616131323631366435643964 Dec 12 18:15:02.115000 audit: BPF prog-id=104 op=UNLOAD Dec 12 18:15:02.115000 audit[2929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636639653362346531333063643938616131323631366435643964 Dec 12 18:15:02.115000 audit: BPF prog-id=105 op=LOAD Dec 12 18:15:02.115000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2810 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.115000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636639653362346531333063643938616131323631366435643964 Dec 12 18:15:02.115000 audit: BPF prog-id=106 op=LOAD Dec 12 18:15:02.115000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2810 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636639653362346531333063643938616131323631366435643964 Dec 12 18:15:02.115000 audit: BPF prog-id=106 op=UNLOAD Dec 12 18:15:02.115000 audit[2929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636639653362346531333063643938616131323631366435643964 Dec 12 18:15:02.115000 audit: BPF prog-id=105 op=UNLOAD Dec 12 18:15:02.115000 audit[2929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636639653362346531333063643938616131323631366435643964 Dec 12 18:15:02.115000 audit: BPF prog-id=107 op=LOAD Dec 12 18:15:02.115000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2810 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630636639653362346531333063643938616131323631366435643964 Dec 12 18:15:02.116000 audit: BPF prog-id=108 op=LOAD Dec 12 18:15:02.116000 audit: BPF prog-id=109 op=LOAD Dec 12 18:15:02.116000 audit[2910]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2813 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.116000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303338633734363536613536313665386135663162346266646634 Dec 12 18:15:02.117000 audit: BPF prog-id=109 op=UNLOAD Dec 12 18:15:02.117000 audit[2910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2813 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303338633734363536613536313665386135663162346266646634 Dec 12 18:15:02.117000 audit: BPF prog-id=110 op=LOAD Dec 12 18:15:02.117000 audit[2910]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2813 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303338633734363536613536313665386135663162346266646634 Dec 12 18:15:02.117000 audit: BPF prog-id=111 op=LOAD Dec 12 18:15:02.117000 audit[2910]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2813 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303338633734363536613536313665386135663162346266646634 Dec 12 18:15:02.117000 audit: BPF prog-id=111 op=UNLOAD Dec 12 18:15:02.117000 audit[2910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2813 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303338633734363536613536313665386135663162346266646634 Dec 12 18:15:02.117000 audit: BPF prog-id=110 op=UNLOAD Dec 12 18:15:02.117000 audit[2910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2813 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.117000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303338633734363536613536313665386135663162346266646634 Dec 12 18:15:02.117000 audit: BPF prog-id=112 op=LOAD Dec 12 18:15:02.117000 audit[2910]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2813 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:02.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303338633734363536613536313665386135663162346266646634 Dec 12 18:15:02.155204 containerd[1848]: time="2025-12-12T18:15:02.155106996Z" level=info msg="StartContainer for \"579d77e4b1712cfd01ea93685093f3aefd1bef3dbdf348c5104a74cf5edc9834\" returns successfully" Dec 12 18:15:02.156684 containerd[1848]: time="2025-12-12T18:15:02.156630921Z" level=info msg="StartContainer for \"60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338\" returns successfully" Dec 12 18:15:02.159680 containerd[1848]: time="2025-12-12T18:15:02.159639675Z" level=info msg="StartContainer for \"de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34\" returns successfully" Dec 12 18:15:02.202377 kubelet[2723]: I1212 18:15:02.202347 2723 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:02.461474 kubelet[2723]: E1212 18:15:02.461299 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-14f87f00b0\" not found" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:02.462134 kubelet[2723]: E1212 18:15:02.462120 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-14f87f00b0\" not found" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:02.464775 kubelet[2723]: E1212 18:15:02.464756 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-14f87f00b0\" not found" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.147972 kubelet[2723]: E1212 18:15:03.147856 2723 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515-1-0-e-14f87f00b0\" not found" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.253713 kubelet[2723]: I1212 18:15:03.253654 2723 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.253713 kubelet[2723]: E1212 18:15:03.253704 2723 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-e-14f87f00b0\": node \"ci-4515-1-0-e-14f87f00b0\" not found" Dec 12 18:15:03.333068 kubelet[2723]: I1212 18:15:03.332831 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.339140 kubelet[2723]: E1212 18:15:03.339102 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-e-14f87f00b0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" 
Dec 12 18:15:03.339140 kubelet[2723]: I1212 18:15:03.339129 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.340628 kubelet[2723]: E1212 18:15:03.340479 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.340628 kubelet[2723]: I1212 18:15:03.340500 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.341601 kubelet[2723]: E1212 18:15:03.341583 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-e-14f87f00b0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.416239 kubelet[2723]: I1212 18:15:03.415902 2723 apiserver.go:52] "Watching apiserver" Dec 12 18:15:03.432947 kubelet[2723]: I1212 18:15:03.432870 2723 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:15:03.463495 kubelet[2723]: I1212 18:15:03.463452 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.463594 kubelet[2723]: I1212 18:15:03.463578 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.465671 kubelet[2723]: E1212 18:15:03.465642 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-e-14f87f00b0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:03.465730 kubelet[2723]: E1212 18:15:03.465688 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-e-14f87f00b0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:05.733750 systemd[1]: Reload requested from client PID 3016 ('systemctl') (unit session-9.scope)... Dec 12 18:15:05.733765 systemd[1]: Reloading... Dec 12 18:15:05.784792 zram_generator::config[3062]: No configuration found. Dec 12 18:15:05.985202 systemd[1]: Reloading finished in 251 ms. Dec 12 18:15:06.020513 kubelet[2723]: I1212 18:15:06.020374 2723 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:15:06.020441 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:15:06.035633 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:15:06.035955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:15:06.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:06.036019 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 135.3M memory peak. 
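The "no PriorityClass with name system-node-critical was found" failures are transient: the API server creates the built-in system-node-critical and system-cluster-critical classes shortly after it starts, and the mirror-pod retries then succeed. As an illustration (kubeconfig path assumed), the object the kubelet is waiting for can be checked with client-go:

```go
// Sketch: look up the PriorityClass the kubelet complains about. In a normal
// cluster the API server bootstraps system-node-critical itself, so this is
// only a read, not something an operator has to create.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(),
		"system-node-critical", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not created yet:", err) // matches the forbidden mirror-pod errors above
		return
	}
	fmt.Println("system-node-critical value:", pc.Value)
}
```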
Dec 12 18:15:06.037219 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 18:15:06.037281 kernel: audit: type=1131 audit(1765563306.034:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:06.037718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:15:06.036000 audit: BPF prog-id=113 op=LOAD Dec 12 18:15:06.040698 kernel: audit: type=1334 audit(1765563306.036:397): prog-id=113 op=LOAD Dec 12 18:15:06.036000 audit: BPF prog-id=78 op=UNLOAD Dec 12 18:15:06.041812 kernel: audit: type=1334 audit(1765563306.036:398): prog-id=78 op=UNLOAD Dec 12 18:15:06.036000 audit: BPF prog-id=114 op=LOAD Dec 12 18:15:06.042941 kernel: audit: type=1334 audit(1765563306.036:399): prog-id=114 op=LOAD Dec 12 18:15:06.036000 audit: BPF prog-id=115 op=LOAD Dec 12 18:15:06.044059 kernel: audit: type=1334 audit(1765563306.036:400): prog-id=115 op=LOAD Dec 12 18:15:06.036000 audit: BPF prog-id=79 op=UNLOAD Dec 12 18:15:06.045859 kernel: audit: type=1334 audit(1765563306.036:401): prog-id=79 op=UNLOAD Dec 12 18:15:06.045897 kernel: audit: type=1334 audit(1765563306.036:402): prog-id=80 op=UNLOAD Dec 12 18:15:06.036000 audit: BPF prog-id=80 op=UNLOAD Dec 12 18:15:06.037000 audit: BPF prog-id=116 op=LOAD Dec 12 18:15:06.047939 kernel: audit: type=1334 audit(1765563306.037:403): prog-id=116 op=LOAD Dec 12 18:15:06.047980 kernel: audit: type=1334 audit(1765563306.037:404): prog-id=72 op=UNLOAD Dec 12 18:15:06.037000 audit: BPF prog-id=72 op=UNLOAD Dec 12 18:15:06.037000 audit: BPF prog-id=117 op=LOAD Dec 12 18:15:06.049716 kernel: audit: type=1334 audit(1765563306.037:405): prog-id=117 op=LOAD Dec 12 18:15:06.037000 audit: BPF prog-id=118 op=LOAD Dec 12 18:15:06.037000 audit: BPF prog-id=73 op=UNLOAD Dec 12 18:15:06.037000 audit: BPF prog-id=74 op=UNLOAD Dec 12 18:15:06.037000 audit: BPF prog-id=119 op=LOAD Dec 12 18:15:06.037000 audit: BPF prog-id=120 op=LOAD Dec 12 18:15:06.037000 audit: BPF prog-id=67 op=UNLOAD Dec 12 18:15:06.037000 audit: BPF prog-id=68 op=UNLOAD Dec 12 18:15:06.038000 audit: BPF prog-id=121 op=LOAD Dec 12 18:15:06.038000 audit: BPF prog-id=81 op=UNLOAD Dec 12 18:15:06.039000 audit: BPF prog-id=122 op=LOAD Dec 12 18:15:06.039000 audit: BPF prog-id=69 op=UNLOAD Dec 12 18:15:06.039000 audit: BPF prog-id=123 op=LOAD Dec 12 18:15:06.039000 audit: BPF prog-id=124 op=LOAD Dec 12 18:15:06.039000 audit: BPF prog-id=70 op=UNLOAD Dec 12 18:15:06.039000 audit: BPF prog-id=71 op=UNLOAD Dec 12 18:15:06.040000 audit: BPF prog-id=125 op=LOAD Dec 12 18:15:06.040000 audit: BPF prog-id=66 op=UNLOAD Dec 12 18:15:06.049000 audit: BPF prog-id=126 op=LOAD Dec 12 18:15:06.049000 audit: BPF prog-id=75 op=UNLOAD Dec 12 18:15:06.049000 audit: BPF prog-id=127 op=LOAD Dec 12 18:15:06.049000 audit: BPF prog-id=128 op=LOAD Dec 12 18:15:06.049000 audit: BPF prog-id=76 op=UNLOAD Dec 12 18:15:06.049000 audit: BPF prog-id=77 op=UNLOAD Dec 12 18:15:06.050000 audit: BPF prog-id=129 op=LOAD Dec 12 18:15:06.050000 audit: BPF prog-id=82 op=UNLOAD Dec 12 18:15:06.051000 audit: BPF prog-id=130 op=LOAD Dec 12 18:15:06.051000 audit: BPF prog-id=63 op=UNLOAD Dec 12 18:15:06.051000 audit: BPF prog-id=131 op=LOAD Dec 12 18:15:06.051000 audit: BPF prog-id=132 op=LOAD Dec 12 18:15:06.051000 audit: BPF prog-id=64 op=UNLOAD Dec 12 18:15:06.051000 audit: BPF prog-id=65 op=UNLOAD Dec 12 18:15:06.178858 systemd[1]: Started kubelet.service - kubelet: The 
Kubernetes Node Agent. Dec 12 18:15:06.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:06.183578 (kubelet)[3114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:15:06.220438 kubelet[3114]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:15:06.220438 kubelet[3114]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:15:06.220438 kubelet[3114]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:15:06.221178 kubelet[3114]: I1212 18:15:06.220492 3114 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:15:06.228692 kubelet[3114]: I1212 18:15:06.228074 3114 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:15:06.228692 kubelet[3114]: I1212 18:15:06.228100 3114 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:15:06.228692 kubelet[3114]: I1212 18:15:06.228305 3114 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:15:06.229984 kubelet[3114]: I1212 18:15:06.229503 3114 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:15:06.231548 kubelet[3114]: I1212 18:15:06.231506 3114 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:15:06.234507 kubelet[3114]: I1212 18:15:06.234478 3114 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:15:06.241092 kubelet[3114]: I1212 18:15:06.240980 3114 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:15:06.241187 kubelet[3114]: I1212 18:15:06.241164 3114 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:15:06.241323 kubelet[3114]: I1212 18:15:06.241182 3114 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-e-14f87f00b0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:15:06.241323 kubelet[3114]: I1212 18:15:06.241322 3114 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:15:06.241433 kubelet[3114]: I1212 18:15:06.241330 3114 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:15:06.241433 kubelet[3114]: I1212 18:15:06.241369 3114 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:15:06.241528 kubelet[3114]: I1212 18:15:06.241517 3114 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:15:06.241553 kubelet[3114]: I1212 18:15:06.241529 3114 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:15:06.241553 kubelet[3114]: I1212 18:15:06.241548 3114 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:15:06.241592 kubelet[3114]: I1212 18:15:06.241562 3114 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:15:06.242642 kubelet[3114]: I1212 18:15:06.242354 3114 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:15:06.242805 kubelet[3114]: I1212 18:15:06.242795 3114 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:15:06.244999 kubelet[3114]: I1212 18:15:06.244973 3114 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:15:06.245079 kubelet[3114]: I1212 18:15:06.245013 3114 server.go:1289] "Started kubelet" Dec 12 18:15:06.245780 kubelet[3114]: I1212 18:15:06.245714 3114 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 
18:15:06.246394 kubelet[3114]: I1212 18:15:06.245794 3114 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:15:06.246394 kubelet[3114]: I1212 18:15:06.246348 3114 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:15:06.246751 kubelet[3114]: I1212 18:15:06.246650 3114 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:15:06.247173 kubelet[3114]: I1212 18:15:06.247157 3114 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:15:06.247347 kubelet[3114]: I1212 18:15:06.247253 3114 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:15:06.249175 kubelet[3114]: E1212 18:15:06.249156 3114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515-1-0-e-14f87f00b0\" not found" Dec 12 18:15:06.249229 kubelet[3114]: I1212 18:15:06.249188 3114 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:15:06.250863 kubelet[3114]: I1212 18:15:06.249620 3114 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:15:06.250863 kubelet[3114]: I1212 18:15:06.249775 3114 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:15:06.252175 kubelet[3114]: I1212 18:15:06.252146 3114 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:15:06.252375 kubelet[3114]: I1212 18:15:06.252355 3114 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:15:06.253097 kubelet[3114]: E1212 18:15:06.253070 3114 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:15:06.255964 kubelet[3114]: I1212 18:15:06.255940 3114 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:15:06.263174 kubelet[3114]: I1212 18:15:06.263143 3114 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:15:06.264185 kubelet[3114]: I1212 18:15:06.264165 3114 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:15:06.264185 kubelet[3114]: I1212 18:15:06.264184 3114 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:15:06.264283 kubelet[3114]: I1212 18:15:06.264202 3114 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
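The Container Manager NodeConfig dump above encodes the kubelet's hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%) and the systemd cgroup driver. A sketch of how the same settings look when expressed as a KubeletConfiguration object (illustrative; the node's actual config file is not shown in this log):

```go
// Sketch: the logged NodeConfig values rendered as a KubeletConfiguration.
package main

import (
	"fmt"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		CgroupDriver: "systemd", // matches the cgroup driver reported by the CRI runtime
		EvictionHard: map[string]string{
			"memory.available":   "100Mi",
			"nodefs.available":   "10%",
			"nodefs.inodesFree":  "5%",
			"imagefs.available":  "15%",
			"imagefs.inodesFree": "5%",
		},
	}
	fmt.Printf("%+v\n", cfg.EvictionHard)
}
```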
Dec 12 18:15:06.264283 kubelet[3114]: I1212 18:15:06.264209 3114 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:15:06.264283 kubelet[3114]: E1212 18:15:06.264258 3114 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:15:06.282601 kubelet[3114]: I1212 18:15:06.282564 3114 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:15:06.282601 kubelet[3114]: I1212 18:15:06.282586 3114 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:15:06.282601 kubelet[3114]: I1212 18:15:06.282607 3114 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:15:06.282830 kubelet[3114]: I1212 18:15:06.282748 3114 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:15:06.282830 kubelet[3114]: I1212 18:15:06.282757 3114 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:15:06.282830 kubelet[3114]: I1212 18:15:06.282772 3114 policy_none.go:49] "None policy: Start" Dec 12 18:15:06.282830 kubelet[3114]: I1212 18:15:06.282780 3114 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:15:06.282830 kubelet[3114]: I1212 18:15:06.282800 3114 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:15:06.282926 kubelet[3114]: I1212 18:15:06.282879 3114 state_mem.go:75] "Updated machine memory state" Dec 12 18:15:06.286256 kubelet[3114]: E1212 18:15:06.286199 3114 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:15:06.286461 kubelet[3114]: I1212 18:15:06.286438 3114 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:15:06.286489 kubelet[3114]: I1212 18:15:06.286453 3114 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:15:06.287202 kubelet[3114]: I1212 18:15:06.287157 3114 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:15:06.287894 kubelet[3114]: E1212 18:15:06.287871 3114 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:15:06.365035 kubelet[3114]: I1212 18:15:06.364976 3114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.365035 kubelet[3114]: I1212 18:15:06.365002 3114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.365198 kubelet[3114]: I1212 18:15:06.365117 3114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.389202 kubelet[3114]: I1212 18:15:06.389152 3114 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.396238 kubelet[3114]: I1212 18:15:06.396201 3114 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.396352 kubelet[3114]: I1212 18:15:06.396286 3114 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.451248 kubelet[3114]: I1212 18:15:06.451198 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: \"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.451449 kubelet[3114]: I1212 18:15:06.451284 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb9274a72e8e0806c6af77c8397b1934-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-e-14f87f00b0\" (UID: \"cb9274a72e8e0806c6af77c8397b1934\") " pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.451449 kubelet[3114]: I1212 18:15:06.451314 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: \"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.451449 kubelet[3114]: I1212 18:15:06.451335 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: \"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.451449 kubelet[3114]: I1212 18:15:06.451353 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: \"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.451449 kubelet[3114]: I1212 18:15:06.451367 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b94aa12fd8673d6357477d6df957aaf2-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-e-14f87f00b0\" (UID: 
\"b94aa12fd8673d6357477d6df957aaf2\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.451696 kubelet[3114]: I1212 18:15:06.451380 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b94aa12fd8673d6357477d6df957aaf2-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-e-14f87f00b0\" (UID: \"b94aa12fd8673d6357477d6df957aaf2\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.451696 kubelet[3114]: I1212 18:15:06.451410 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b94aa12fd8673d6357477d6df957aaf2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-e-14f87f00b0\" (UID: \"b94aa12fd8673d6357477d6df957aaf2\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.451696 kubelet[3114]: I1212 18:15:06.451427 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1e6dca2771c2ad4f206c0da74a69f151-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" (UID: \"1e6dca2771c2ad4f206c0da74a69f151\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:06.493140 update_engine[1829]: I20251212 18:15:06.492949 1829 update_attempter.cc:509] Updating boot flags... Dec 12 18:15:07.242084 kubelet[3114]: I1212 18:15:07.242007 3114 apiserver.go:52] "Watching apiserver" Dec 12 18:15:07.249794 kubelet[3114]: I1212 18:15:07.249748 3114 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:15:07.272437 kubelet[3114]: I1212 18:15:07.272408 3114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:07.273442 kubelet[3114]: I1212 18:15:07.272692 3114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:07.273442 kubelet[3114]: I1212 18:15:07.272765 3114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:07.278500 kubelet[3114]: E1212 18:15:07.278460 3114 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-e-14f87f00b0\" already exists" pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:07.279217 kubelet[3114]: E1212 18:15:07.278779 3114 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-e-14f87f00b0\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:07.279217 kubelet[3114]: E1212 18:15:07.279191 3114 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-e-14f87f00b0\" already exists" pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:07.303864 kubelet[3114]: I1212 18:15:07.303795 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515-1-0-e-14f87f00b0" podStartSLOduration=1.303774753 podStartE2EDuration="1.303774753s" podCreationTimestamp="2025-12-12 18:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:15:07.292356207 +0000 UTC m=+1.104879339" 
watchObservedRunningTime="2025-12-12 18:15:07.303774753 +0000 UTC m=+1.116297866" Dec 12 18:15:07.313489 kubelet[3114]: I1212 18:15:07.313410 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515-1-0-e-14f87f00b0" podStartSLOduration=1.313393239 podStartE2EDuration="1.313393239s" podCreationTimestamp="2025-12-12 18:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:15:07.313372602 +0000 UTC m=+1.125895736" watchObservedRunningTime="2025-12-12 18:15:07.313393239 +0000 UTC m=+1.125916376" Dec 12 18:15:07.313638 kubelet[3114]: I1212 18:15:07.313520 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515-1-0-e-14f87f00b0" podStartSLOduration=1.313515349 podStartE2EDuration="1.313515349s" podCreationTimestamp="2025-12-12 18:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:15:07.303923364 +0000 UTC m=+1.116446483" watchObservedRunningTime="2025-12-12 18:15:07.313515349 +0000 UTC m=+1.126038603" Dec 12 18:15:11.338376 kubelet[3114]: I1212 18:15:11.338317 3114 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:15:11.339121 kubelet[3114]: I1212 18:15:11.338912 3114 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:15:11.339189 containerd[1848]: time="2025-12-12T18:15:11.338703913Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:15:12.292016 systemd[1]: Created slice kubepods-besteffort-podf641c479_d4a3_4a23_b8fe_7068b242662e.slice - libcontainer container kubepods-besteffort-podf641c479_d4a3_4a23_b8fe_7068b242662e.slice. 
Dec 12 18:15:12.388466 kubelet[3114]: I1212 18:15:12.388392 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f641c479-d4a3-4a23-b8fe-7068b242662e-lib-modules\") pod \"kube-proxy-bzdcg\" (UID: \"f641c479-d4a3-4a23-b8fe-7068b242662e\") " pod="kube-system/kube-proxy-bzdcg" Dec 12 18:15:12.388466 kubelet[3114]: I1212 18:15:12.388440 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f641c479-d4a3-4a23-b8fe-7068b242662e-kube-proxy\") pod \"kube-proxy-bzdcg\" (UID: \"f641c479-d4a3-4a23-b8fe-7068b242662e\") " pod="kube-system/kube-proxy-bzdcg" Dec 12 18:15:12.388466 kubelet[3114]: I1212 18:15:12.388456 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzm8s\" (UniqueName: \"kubernetes.io/projected/f641c479-d4a3-4a23-b8fe-7068b242662e-kube-api-access-pzm8s\") pod \"kube-proxy-bzdcg\" (UID: \"f641c479-d4a3-4a23-b8fe-7068b242662e\") " pod="kube-system/kube-proxy-bzdcg" Dec 12 18:15:12.388466 kubelet[3114]: I1212 18:15:12.388473 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f641c479-d4a3-4a23-b8fe-7068b242662e-xtables-lock\") pod \"kube-proxy-bzdcg\" (UID: \"f641c479-d4a3-4a23-b8fe-7068b242662e\") " pod="kube-system/kube-proxy-bzdcg" Dec 12 18:15:12.561024 systemd[1]: Created slice kubepods-besteffort-podd8ca68cd_5422_42bf_b546_b6cfae705403.slice - libcontainer container kubepods-besteffort-podd8ca68cd_5422_42bf_b546_b6cfae705403.slice. Dec 12 18:15:12.589774 kubelet[3114]: I1212 18:15:12.589735 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8ca68cd-5422-42bf-b546-b6cfae705403-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lkdvm\" (UID: \"d8ca68cd-5422-42bf-b546-b6cfae705403\") " pod="tigera-operator/tigera-operator-7dcd859c48-lkdvm" Dec 12 18:15:12.589997 kubelet[3114]: I1212 18:15:12.589947 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54xgr\" (UniqueName: \"kubernetes.io/projected/d8ca68cd-5422-42bf-b546-b6cfae705403-kube-api-access-54xgr\") pod \"tigera-operator-7dcd859c48-lkdvm\" (UID: \"d8ca68cd-5422-42bf-b546-b6cfae705403\") " pod="tigera-operator/tigera-operator-7dcd859c48-lkdvm" Dec 12 18:15:12.603565 containerd[1848]: time="2025-12-12T18:15:12.603471578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzdcg,Uid:f641c479-d4a3-4a23-b8fe-7068b242662e,Namespace:kube-system,Attempt:0,}" Dec 12 18:15:12.627148 containerd[1848]: time="2025-12-12T18:15:12.627100239Z" level=info msg="connecting to shim 9fb9b85d67f16029f6a39d921a1fb4eca3a7a841c7177d5e5d781a1b9b09cfda" address="unix:///run/containerd/s/f2bf1b4b1bf2546c07a5ecf013cdbc627ad5620b4e66f03497206da11f7489b2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:12.658955 systemd[1]: Started cri-containerd-9fb9b85d67f16029f6a39d921a1fb4eca3a7a841c7177d5e5d781a1b9b09cfda.scope - libcontainer container 9fb9b85d67f16029f6a39d921a1fb4eca3a7a841c7177d5e5d781a1b9b09cfda. 
Dec 12 18:15:12.666000 audit: BPF prog-id=133 op=LOAD Dec 12 18:15:12.668843 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 18:15:12.668920 kernel: audit: type=1334 audit(1765563312.666:438): prog-id=133 op=LOAD Dec 12 18:15:12.667000 audit: BPF prog-id=134 op=LOAD Dec 12 18:15:12.670943 kernel: audit: type=1334 audit(1765563312.667:439): prog-id=134 op=LOAD Dec 12 18:15:12.671004 kernel: audit: type=1300 audit(1765563312.667:439): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.667000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.675722 kernel: audit: type=1327 audit(1765563312.667:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.667000 audit: BPF prog-id=134 op=UNLOAD Dec 12 18:15:12.678676 kernel: audit: type=1334 audit(1765563312.667:440): prog-id=134 op=UNLOAD Dec 12 18:15:12.667000 audit[3216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.680514 kernel: audit: type=1300 audit(1765563312.667:440): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.684255 kernel: audit: type=1327 audit(1765563312.667:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.667000 audit: BPF prog-id=135 op=LOAD Dec 12 18:15:12.687951 kernel: audit: type=1334 audit(1765563312.667:441): prog-id=135 op=LOAD Dec 12 18:15:12.688009 kernel: audit: type=1300 audit(1765563312.667:441): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.667000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.691686 containerd[1848]: time="2025-12-12T18:15:12.691634449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzdcg,Uid:f641c479-d4a3-4a23-b8fe-7068b242662e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fb9b85d67f16029f6a39d921a1fb4eca3a7a841c7177d5e5d781a1b9b09cfda\"" Dec 12 18:15:12.697106 kernel: audit: type=1327 audit(1765563312.667:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.667000 audit: BPF prog-id=136 op=LOAD Dec 12 18:15:12.667000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.667000 audit: BPF prog-id=136 op=UNLOAD Dec 12 18:15:12.667000 audit[3216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.667000 audit: BPF prog-id=135 op=UNLOAD Dec 12 18:15:12.667000 audit[3216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.667000 audit: BPF prog-id=137 op=LOAD Dec 12 18:15:12.667000 audit[3216]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3204 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966623962383564363766313630323966366133396439323161316662 Dec 12 18:15:12.700978 containerd[1848]: time="2025-12-12T18:15:12.700932751Z" level=info msg="CreateContainer within sandbox \"9fb9b85d67f16029f6a39d921a1fb4eca3a7a841c7177d5e5d781a1b9b09cfda\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:15:12.714507 containerd[1848]: time="2025-12-12T18:15:12.714434471Z" level=info msg="Container fcb3c3a28a17b223e130336743d2fb4b5c5c394066f7a2692f92105d5a0baf01: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:12.725455 containerd[1848]: time="2025-12-12T18:15:12.725388325Z" level=info msg="CreateContainer within sandbox \"9fb9b85d67f16029f6a39d921a1fb4eca3a7a841c7177d5e5d781a1b9b09cfda\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fcb3c3a28a17b223e130336743d2fb4b5c5c394066f7a2692f92105d5a0baf01\"" Dec 12 18:15:12.726194 containerd[1848]: time="2025-12-12T18:15:12.726151434Z" level=info msg="StartContainer for \"fcb3c3a28a17b223e130336743d2fb4b5c5c394066f7a2692f92105d5a0baf01\"" Dec 12 18:15:12.728037 containerd[1848]: time="2025-12-12T18:15:12.727989787Z" level=info msg="connecting to shim fcb3c3a28a17b223e130336743d2fb4b5c5c394066f7a2692f92105d5a0baf01" address="unix:///run/containerd/s/f2bf1b4b1bf2546c07a5ecf013cdbc627ad5620b4e66f03497206da11f7489b2" protocol=ttrpc version=3 Dec 12 18:15:12.760961 systemd[1]: Started cri-containerd-fcb3c3a28a17b223e130336743d2fb4b5c5c394066f7a2692f92105d5a0baf01.scope - libcontainer container fcb3c3a28a17b223e130336743d2fb4b5c5c394066f7a2692f92105d5a0baf01. 
Dec 12 18:15:12.824000 audit: BPF prog-id=138 op=LOAD Dec 12 18:15:12.824000 audit[3244]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3204 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623363336132386131376232323365313330333336373433643266 Dec 12 18:15:12.824000 audit: BPF prog-id=139 op=LOAD Dec 12 18:15:12.824000 audit[3244]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3204 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623363336132386131376232323365313330333336373433643266 Dec 12 18:15:12.824000 audit: BPF prog-id=139 op=UNLOAD Dec 12 18:15:12.824000 audit[3244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623363336132386131376232323365313330333336373433643266 Dec 12 18:15:12.824000 audit: BPF prog-id=138 op=UNLOAD Dec 12 18:15:12.824000 audit[3244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3204 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623363336132386131376232323365313330333336373433643266 Dec 12 18:15:12.824000 audit: BPF prog-id=140 op=LOAD Dec 12 18:15:12.824000 audit[3244]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3204 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663623363336132386131376232323365313330333336373433643266 Dec 12 18:15:12.845456 containerd[1848]: time="2025-12-12T18:15:12.845423947Z" level=info msg="StartContainer for 
\"fcb3c3a28a17b223e130336743d2fb4b5c5c394066f7a2692f92105d5a0baf01\" returns successfully" Dec 12 18:15:12.864467 containerd[1848]: time="2025-12-12T18:15:12.864426143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lkdvm,Uid:d8ca68cd-5422-42bf-b546-b6cfae705403,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:15:12.890547 containerd[1848]: time="2025-12-12T18:15:12.890497770Z" level=info msg="connecting to shim 112724f19b8143a22ebb263b3856f10942439a24f55c1ff4612e66d05c93cbfb" address="unix:///run/containerd/s/066131dd5b783385e45471e969109d2f93ff2be3bbc1d4b2293204d4e2d07319" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:12.923944 systemd[1]: Started cri-containerd-112724f19b8143a22ebb263b3856f10942439a24f55c1ff4612e66d05c93cbfb.scope - libcontainer container 112724f19b8143a22ebb263b3856f10942439a24f55c1ff4612e66d05c93cbfb. Dec 12 18:15:12.933000 audit: BPF prog-id=141 op=LOAD Dec 12 18:15:12.934000 audit: BPF prog-id=142 op=LOAD Dec 12 18:15:12.934000 audit[3303]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3291 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131323732346631396238313433613232656262323633623338353666 Dec 12 18:15:12.934000 audit: BPF prog-id=142 op=UNLOAD Dec 12 18:15:12.934000 audit[3303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131323732346631396238313433613232656262323633623338353666 Dec 12 18:15:12.934000 audit: BPF prog-id=143 op=LOAD Dec 12 18:15:12.934000 audit[3303]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3291 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131323732346631396238313433613232656262323633623338353666 Dec 12 18:15:12.934000 audit: BPF prog-id=144 op=LOAD Dec 12 18:15:12.934000 audit[3303]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3291 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.934000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131323732346631396238313433613232656262323633623338353666 Dec 12 18:15:12.934000 audit: BPF prog-id=144 op=UNLOAD Dec 12 18:15:12.934000 audit[3303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131323732346631396238313433613232656262323633623338353666 Dec 12 18:15:12.934000 audit: BPF prog-id=143 op=UNLOAD Dec 12 18:15:12.934000 audit[3303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131323732346631396238313433613232656262323633623338353666 Dec 12 18:15:12.934000 audit: BPF prog-id=145 op=LOAD Dec 12 18:15:12.934000 audit[3303]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3291 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131323732346631396238313433613232656262323633623338353666 Dec 12 18:15:12.970706 containerd[1848]: time="2025-12-12T18:15:12.970237527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lkdvm,Uid:d8ca68cd-5422-42bf-b546-b6cfae705403,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"112724f19b8143a22ebb263b3856f10942439a24f55c1ff4612e66d05c93cbfb\"" Dec 12 18:15:12.972205 containerd[1848]: time="2025-12-12T18:15:12.972172179Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:15:12.987000 audit[3360]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:12.987000 audit[3360]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc21e314d0 a2=0 a3=7ffc21e314bc items=0 ppid=3256 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:15:12.987000 audit[3359]: NETFILTER_CFG table=mangle:55 family=2 entries=1 
op=nft_register_chain pid=3359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:12.987000 audit[3359]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef69c39e0 a2=0 a3=67baed7d30afb1f1 items=0 ppid=3256 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.987000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:15:12.989000 audit[3362]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:12.989000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7a19c750 a2=0 a3=7ffe7a19c73c items=0 ppid=3256 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.989000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:15:12.990000 audit[3365]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:12.990000 audit[3365]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1d406600 a2=0 a3=7ffd1d4065ec items=0 ppid=3256 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:15:12.990000 audit[3364]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3364 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:12.990000 audit[3364]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff445084f0 a2=0 a3=7fff445084dc items=0 ppid=3256 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.990000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:15:12.991000 audit[3366]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:12.991000 audit[3366]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb7538ca0 a2=0 a3=7fffb7538c8c items=0 ppid=3256 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:12.991000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:15:13.089000 audit[3368]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.089000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=108 a0=3 a1=7ffcb78454b0 a2=0 a3=7ffcb784549c items=0 ppid=3256 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:15:13.093000 audit[3370]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.093000 audit[3370]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcc2e24b20 a2=0 a3=7ffcc2e24b0c items=0 ppid=3256 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 18:15:13.096000 audit[3373]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.096000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff15953250 a2=0 a3=7fff1595323c items=0 ppid=3256 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.096000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 18:15:13.098000 audit[3374]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.098000 audit[3374]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe248aed20 a2=0 a3=7ffe248aed0c items=0 ppid=3256 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:15:13.100000 audit[3376]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.100000 audit[3376]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc45bf27b0 a2=0 a3=7ffc45bf279c items=0 ppid=3256 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.100000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:15:13.102000 audit[3377]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.102000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6d6b0240 a2=0 a3=7fff6d6b022c items=0 ppid=3256 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:15:13.104000 audit[3379]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.104000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe66bbf000 a2=0 a3=7ffe66bbefec items=0 ppid=3256 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:15:13.108000 audit[3382]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.108000 audit[3382]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffed7d66a40 a2=0 a3=7ffed7d66a2c items=0 ppid=3256 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 18:15:13.109000 audit[3383]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.109000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe5448940 a2=0 a3=7fffe544892c items=0 ppid=3256 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.109000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:15:13.112000 audit[3385]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.112000 audit[3385]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5116a8a0 a2=0 a3=7ffd5116a88c items=0 
ppid=3256 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:15:13.113000 audit[3386]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.113000 audit[3386]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6b0246b0 a2=0 a3=7fff6b02469c items=0 ppid=3256 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.113000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:15:13.116000 audit[3388]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.116000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4af9da90 a2=0 a3=7ffe4af9da7c items=0 ppid=3256 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:15:13.120000 audit[3391]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.120000 audit[3391]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff18a40180 a2=0 a3=7fff18a4016c items=0 ppid=3256 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:15:13.123000 audit[3394]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.123000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc27e279b0 a2=0 a3=7ffc27e2799c items=0 ppid=3256 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.123000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:15:13.126000 audit[3395]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.126000 audit[3395]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd3ff86330 a2=0 a3=7ffd3ff8631c items=0 ppid=3256 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.126000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:15:13.128000 audit[3397]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.128000 audit[3397]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc2b1cad70 a2=0 a3=7ffc2b1cad5c items=0 ppid=3256 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:15:13.132000 audit[3400]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.132000 audit[3400]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc36f9d440 a2=0 a3=7ffc36f9d42c items=0 ppid=3256 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:15:13.134000 audit[3401]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.134000 audit[3401]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0e1dab40 a2=0 a3=7fff0e1dab2c items=0 ppid=3256 pid=3401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.134000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:15:13.137000 audit[3403]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:15:13.137000 audit[3403]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffddb617260 a2=0 a3=7ffddb61724c items=0 ppid=3256 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.137000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:15:13.162000 audit[3409]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:13.162000 audit[3409]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff99d6f030 a2=0 a3=7fff99d6f01c items=0 ppid=3256 pid=3409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.162000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:13.186000 audit[3409]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:13.186000 audit[3409]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff99d6f030 a2=0 a3=7fff99d6f01c items=0 ppid=3256 pid=3409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.186000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:13.188000 audit[3414]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.188000 audit[3414]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc739af390 a2=0 a3=7ffc739af37c items=0 ppid=3256 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:15:13.191000 audit[3416]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.191000 audit[3416]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc1825b860 a2=0 a3=7ffc1825b84c items=0 ppid=3256 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.191000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 18:15:13.195000 audit[3419]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.195000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=752 a0=3 a1=7ffc6d028070 a2=0 a3=7ffc6d02805c items=0 ppid=3256 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 18:15:13.196000 audit[3420]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.196000 audit[3420]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd9f8c270 a2=0 a3=7ffcd9f8c25c items=0 ppid=3256 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:15:13.199000 audit[3422]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.199000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd66da0450 a2=0 a3=7ffd66da043c items=0 ppid=3256 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:15:13.200000 audit[3423]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3423 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.200000 audit[3423]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbb7dbfd0 a2=0 a3=7ffcbb7dbfbc items=0 ppid=3256 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:15:13.203000 audit[3425]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.203000 audit[3425]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe588d4570 a2=0 a3=7ffe588d455c items=0 ppid=3256 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.203000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 18:15:13.207000 audit[3428]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.207000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffcbe1bb70 a2=0 a3=7fffcbe1bb5c items=0 ppid=3256 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.207000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:15:13.208000 audit[3429]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.208000 audit[3429]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd85454050 a2=0 a3=7ffd8545403c items=0 ppid=3256 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.208000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:15:13.211000 audit[3431]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.211000 audit[3431]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefb2dd9a0 a2=0 a3=7ffefb2dd98c items=0 ppid=3256 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:15:13.212000 audit[3432]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.212000 audit[3432]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea6361430 a2=0 a3=7ffea636141c items=0 ppid=3256 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:15:13.215000 audit[3434]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.215000 audit[3434]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc838f250 a2=0 a3=7ffcc838f23c 
items=0 ppid=3256 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.215000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:15:13.219000 audit[3437]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.219000 audit[3437]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb19cf9d0 a2=0 a3=7fffb19cf9bc items=0 ppid=3256 pid=3437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:15:13.223000 audit[3440]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.223000 audit[3440]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd3501990 a2=0 a3=7ffdd350197c items=0 ppid=3256 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 18:15:13.225000 audit[3441]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.225000 audit[3441]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc70916520 a2=0 a3=7ffc7091650c items=0 ppid=3256 pid=3441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.225000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:15:13.227000 audit[3443]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.227000 audit[3443]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe24e00420 a2=0 a3=7ffe24e0040c items=0 ppid=3256 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.227000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:15:13.231000 audit[3446]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.231000 audit[3446]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc82f53250 a2=0 a3=7ffc82f5323c items=0 ppid=3256 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.231000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:15:13.232000 audit[3447]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.232000 audit[3447]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda50e7e10 a2=0 a3=7ffda50e7dfc items=0 ppid=3256 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:15:13.235000 audit[3449]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.235000 audit[3449]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff35b4a670 a2=0 a3=7fff35b4a65c items=0 ppid=3256 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.235000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:15:13.236000 audit[3450]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.236000 audit[3450]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe120e8990 a2=0 a3=7ffe120e897c items=0 ppid=3256 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:15:13.239000 audit[3452]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.239000 audit[3452]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff908cd3d0 a2=0 a3=7fff908cd3bc items=0 ppid=3256 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:15:13.242000 audit[3455]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:15:13.242000 audit[3455]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff39c74ca0 a2=0 a3=7fff39c74c8c items=0 ppid=3256 pid=3455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:15:13.246000 audit[3457]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:15:13.246000 audit[3457]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd35203030 a2=0 a3=7ffd3520301c items=0 ppid=3256 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.246000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:13.247000 audit[3457]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:15:13.247000 audit[3457]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd35203030 a2=0 a3=7ffd3520301c items=0 ppid=3256 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:13.247000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:13.297060 kubelet[3114]: I1212 18:15:13.296983 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzdcg" podStartSLOduration=1.296966004 podStartE2EDuration="1.296966004s" podCreationTimestamp="2025-12-12 18:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:15:13.296913943 +0000 UTC m=+7.109437078" watchObservedRunningTime="2025-12-12 18:15:13.296966004 +0000 UTC m=+7.109489140" Dec 12 18:15:13.505748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500794636.mount: Deactivated successfully. Dec 12 18:15:15.150936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1808547211.mount: Deactivated successfully. 
Dec 12 18:15:21.447856 containerd[1848]: time="2025-12-12T18:15:21.447806060Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:21.449152 containerd[1848]: time="2025-12-12T18:15:21.449129272Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 12 18:15:21.451471 containerd[1848]: time="2025-12-12T18:15:21.451437192Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:21.453742 containerd[1848]: time="2025-12-12T18:15:21.453713883Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:21.454161 containerd[1848]: time="2025-12-12T18:15:21.454131619Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 8.481928189s" Dec 12 18:15:21.454188 containerd[1848]: time="2025-12-12T18:15:21.454161676Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:15:21.458007 containerd[1848]: time="2025-12-12T18:15:21.457968602Z" level=info msg="CreateContainer within sandbox \"112724f19b8143a22ebb263b3856f10942439a24f55c1ff4612e66d05c93cbfb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:15:21.469269 containerd[1848]: time="2025-12-12T18:15:21.469225382Z" level=info msg="Container 0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:21.478010 containerd[1848]: time="2025-12-12T18:15:21.477975583Z" level=info msg="CreateContainer within sandbox \"112724f19b8143a22ebb263b3856f10942439a24f55c1ff4612e66d05c93cbfb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b\"" Dec 12 18:15:21.478624 containerd[1848]: time="2025-12-12T18:15:21.478571118Z" level=info msg="StartContainer for \"0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b\"" Dec 12 18:15:21.480265 containerd[1848]: time="2025-12-12T18:15:21.480184015Z" level=info msg="connecting to shim 0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b" address="unix:///run/containerd/s/066131dd5b783385e45471e969109d2f93ff2be3bbc1d4b2293204d4e2d07319" protocol=ttrpc version=3 Dec 12 18:15:21.504905 systemd[1]: Started cri-containerd-0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b.scope - libcontainer container 0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b. 
Dec 12 18:15:21.512000 audit: BPF prog-id=146 op=LOAD Dec 12 18:15:21.514959 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 18:15:21.515001 kernel: audit: type=1334 audit(1765563321.512:510): prog-id=146 op=LOAD Dec 12 18:15:21.514000 audit: BPF prog-id=147 op=LOAD Dec 12 18:15:21.514000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.518916 kernel: audit: type=1334 audit(1765563321.514:511): prog-id=147 op=LOAD Dec 12 18:15:21.518993 kernel: audit: type=1300 audit(1765563321.514:511): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:21.522923 kernel: audit: type=1327 audit(1765563321.514:511): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:21.514000 audit: BPF prog-id=147 op=UNLOAD Dec 12 18:15:21.525967 kernel: audit: type=1334 audit(1765563321.514:512): prog-id=147 op=UNLOAD Dec 12 18:15:21.514000 audit[3466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.528092 kernel: audit: type=1300 audit(1765563321.514:512): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.530690 kernel: audit: type=1327 audit(1765563321.514:512): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:21.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:21.514000 audit: BPF prog-id=148 op=LOAD Dec 12 18:15:21.535862 kernel: audit: type=1334 audit(1765563321.514:513): prog-id=148 op=LOAD Dec 12 18:15:21.535930 kernel: audit: type=1300 audit(1765563321.514:513): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.514000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.540063 containerd[1848]: time="2025-12-12T18:15:21.539954129Z" level=info msg="StartContainer for \"0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b\" returns successfully" Dec 12 18:15:21.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:21.541789 kernel: audit: type=1327 audit(1765563321.514:513): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:21.514000 audit: BPF prog-id=149 op=LOAD Dec 12 18:15:21.514000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:21.514000 audit: BPF prog-id=149 op=UNLOAD Dec 12 18:15:21.514000 audit[3466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:21.514000 audit: BPF prog-id=148 op=UNLOAD Dec 12 18:15:21.514000 audit[3466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:21.514000 audit: BPF prog-id=150 op=LOAD Dec 12 18:15:21.514000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3291 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:21.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356531343162346664633731663262326634316265303162636236 Dec 12 18:15:22.314681 kubelet[3114]: I1212 18:15:22.314490 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lkdvm" podStartSLOduration=1.831676463 podStartE2EDuration="10.314472794s" podCreationTimestamp="2025-12-12 18:15:12 +0000 UTC" firstStartedPulling="2025-12-12 18:15:12.971902096 +0000 UTC m=+6.784425209" lastFinishedPulling="2025-12-12 18:15:21.454698426 +0000 UTC m=+15.267221540" observedRunningTime="2025-12-12 18:15:22.314383472 +0000 UTC m=+16.126906653" watchObservedRunningTime="2025-12-12 18:15:22.314472794 +0000 UTC m=+16.126995930" Dec 12 18:15:26.399770 sudo[2131]: pam_unix(sudo:session): session closed for user root Dec 12 18:15:26.398000 audit[2131]: USER_END pid=2131 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:15:26.398000 audit[2131]: CRED_DISP pid=2131 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:15:26.556866 sshd[2113]: Connection closed by 139.178.89.65 port 37172 Dec 12 18:15:26.557146 sshd-session[2107]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:26.556000 audit[2107]: USER_END pid=2107 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:26.560021 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 12 18:15:26.560087 kernel: audit: type=1106 audit(1765563326.556:520): pid=2107 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:26.560514 systemd[1]: sshd@8-10.0.8.19:22-139.178.89.65:37172.service: Deactivated successfully. Dec 12 18:15:26.562193 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:15:26.562378 systemd[1]: session-9.scope: Consumed 5.836s CPU time, 236M memory peak. Dec 12 18:15:26.563292 systemd-logind[1824]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:15:26.564080 systemd-logind[1824]: Removed session 9. 
Dec 12 18:15:26.556000 audit[2107]: CRED_DISP pid=2107 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:26.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.8.19:22-139.178.89.65:37172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:26.569792 kernel: audit: type=1104 audit(1765563326.556:521): pid=2107 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:26.569858 kernel: audit: type=1131 audit(1765563326.559:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.8.19:22-139.178.89.65:37172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:26.914000 audit[3580]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:26.919702 kernel: audit: type=1325 audit(1765563326.914:523): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:26.914000 audit[3580]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdeff31240 a2=0 a3=7ffdeff3122c items=0 ppid=3256 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:26.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:26.927478 kernel: audit: type=1300 audit(1765563326.914:523): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdeff31240 a2=0 a3=7ffdeff3122c items=0 ppid=3256 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:26.927558 kernel: audit: type=1327 audit(1765563326.914:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:26.922000 audit[3580]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:26.930189 kernel: audit: type=1325 audit(1765563326.922:524): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:26.922000 audit[3580]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdeff31240 a2=0 a3=0 items=0 ppid=3256 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:26.933927 kernel: audit: type=1300 audit(1765563326.922:524): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdeff31240 a2=0 a3=0 items=0 ppid=3256 pid=3580 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:26.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:26.938407 kernel: audit: type=1327 audit(1765563326.922:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:26.943000 audit[3582]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:26.943000 audit[3582]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdad8062f0 a2=0 a3=7ffdad8062dc items=0 ppid=3256 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:26.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:26.948708 kernel: audit: type=1325 audit(1765563326.943:525): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:26.956000 audit[3582]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:26.956000 audit[3582]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdad8062f0 a2=0 a3=0 items=0 ppid=3256 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:26.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:28.618000 audit[3584]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:28.618000 audit[3584]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffccad888f0 a2=0 a3=7ffccad888dc items=0 ppid=3256 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:28.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:28.628000 audit[3584]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:28.628000 audit[3584]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffccad888f0 a2=0 a3=0 items=0 ppid=3256 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:28.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:29.646000 audit[3586]: NETFILTER_CFG table=filter:111 family=2 entries=19 
op=nft_register_rule pid=3586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:29.646000 audit[3586]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff26906030 a2=0 a3=7fff2690601c items=0 ppid=3256 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:29.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:29.652000 audit[3586]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:29.652000 audit[3586]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff26906030 a2=0 a3=0 items=0 ppid=3256 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:29.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:30.451000 audit[3588]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:30.451000 audit[3588]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd16d5e9b0 a2=0 a3=7ffd16d5e99c items=0 ppid=3256 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:30.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:30.461000 audit[3588]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:30.461000 audit[3588]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd16d5e9b0 a2=0 a3=0 items=0 ppid=3256 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:30.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:30.489177 systemd[1]: Created slice kubepods-besteffort-pod3dec0c1d_e44a_48f3_9115_6634cfbee552.slice - libcontainer container kubepods-besteffort-pod3dec0c1d_e44a_48f3_9115_6634cfbee552.slice. 
Dec 12 18:15:30.502922 kubelet[3114]: I1212 18:15:30.502863 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dec0c1d-e44a-48f3-9115-6634cfbee552-tigera-ca-bundle\") pod \"calico-typha-b6758dc95-g2wnn\" (UID: \"3dec0c1d-e44a-48f3-9115-6634cfbee552\") " pod="calico-system/calico-typha-b6758dc95-g2wnn" Dec 12 18:15:30.502922 kubelet[3114]: I1212 18:15:30.502911 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2w9c\" (UniqueName: \"kubernetes.io/projected/3dec0c1d-e44a-48f3-9115-6634cfbee552-kube-api-access-j2w9c\") pod \"calico-typha-b6758dc95-g2wnn\" (UID: \"3dec0c1d-e44a-48f3-9115-6634cfbee552\") " pod="calico-system/calico-typha-b6758dc95-g2wnn" Dec 12 18:15:30.502922 kubelet[3114]: I1212 18:15:30.502930 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3dec0c1d-e44a-48f3-9115-6634cfbee552-typha-certs\") pod \"calico-typha-b6758dc95-g2wnn\" (UID: \"3dec0c1d-e44a-48f3-9115-6634cfbee552\") " pod="calico-system/calico-typha-b6758dc95-g2wnn" Dec 12 18:15:30.677038 systemd[1]: Created slice kubepods-besteffort-pod58314650_e3ef_4713_a345_70981e07a7e3.slice - libcontainer container kubepods-besteffort-pod58314650_e3ef_4713_a345_70981e07a7e3.slice. Dec 12 18:15:30.704887 kubelet[3114]: I1212 18:15:30.704465 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/58314650-e3ef-4713-a345-70981e07a7e3-var-lib-calico\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.704887 kubelet[3114]: I1212 18:15:30.704519 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58314650-e3ef-4713-a345-70981e07a7e3-lib-modules\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.704887 kubelet[3114]: I1212 18:15:30.704536 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/58314650-e3ef-4713-a345-70981e07a7e3-policysync\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.704887 kubelet[3114]: I1212 18:15:30.704558 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/58314650-e3ef-4713-a345-70981e07a7e3-cni-bin-dir\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.704887 kubelet[3114]: I1212 18:15:30.704575 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/58314650-e3ef-4713-a345-70981e07a7e3-cni-net-dir\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.705089 kubelet[3114]: I1212 18:15:30.704591 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/58314650-e3ef-4713-a345-70981e07a7e3-xtables-lock\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.705089 kubelet[3114]: I1212 18:15:30.704613 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58314650-e3ef-4713-a345-70981e07a7e3-tigera-ca-bundle\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.705089 kubelet[3114]: I1212 18:15:30.704631 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbjm\" (UniqueName: \"kubernetes.io/projected/58314650-e3ef-4713-a345-70981e07a7e3-kube-api-access-8pbjm\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.705089 kubelet[3114]: I1212 18:15:30.704651 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/58314650-e3ef-4713-a345-70981e07a7e3-cni-log-dir\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.705089 kubelet[3114]: I1212 18:15:30.704680 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/58314650-e3ef-4713-a345-70981e07a7e3-flexvol-driver-host\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.705197 kubelet[3114]: I1212 18:15:30.704696 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/58314650-e3ef-4713-a345-70981e07a7e3-node-certs\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.705197 kubelet[3114]: I1212 18:15:30.704711 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/58314650-e3ef-4713-a345-70981e07a7e3-var-run-calico\") pod \"calico-node-n7m4g\" (UID: \"58314650-e3ef-4713-a345-70981e07a7e3\") " pod="calico-system/calico-node-n7m4g" Dec 12 18:15:30.793769 containerd[1848]: time="2025-12-12T18:15:30.793700951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b6758dc95-g2wnn,Uid:3dec0c1d-e44a-48f3-9115-6634cfbee552,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:30.807289 kubelet[3114]: E1212 18:15:30.807255 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.807289 kubelet[3114]: W1212 18:15:30.807276 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.807289 kubelet[3114]: E1212 18:15:30.807299 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:30.809833 kubelet[3114]: E1212 18:15:30.809756 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.809833 kubelet[3114]: W1212 18:15:30.809776 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.809833 kubelet[3114]: E1212 18:15:30.809795 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.813821 kubelet[3114]: E1212 18:15:30.813795 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.813821 kubelet[3114]: W1212 18:15:30.813814 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.813924 kubelet[3114]: E1212 18:15:30.813832 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.820328 containerd[1848]: time="2025-12-12T18:15:30.820277500Z" level=info msg="connecting to shim 09ea10e128b4ad126e38f1804d55059d36a3ea6aeb0d79087be584b3c63322ad" address="unix:///run/containerd/s/9259eb65ec819e8af6a309236abcfa8ff355508658ca3202b90838706d4ecafb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:30.860585 kubelet[3114]: E1212 18:15:30.860499 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:15:30.867053 systemd[1]: Started cri-containerd-09ea10e128b4ad126e38f1804d55059d36a3ea6aeb0d79087be584b3c63322ad.scope - libcontainer container 09ea10e128b4ad126e38f1804d55059d36a3ea6aeb0d79087be584b3c63322ad. 
Dec 12 18:15:30.879000 audit: BPF prog-id=151 op=LOAD Dec 12 18:15:30.880000 audit: BPF prog-id=152 op=LOAD Dec 12 18:15:30.880000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3604 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:30.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039656131306531323862346164313236653338663138303464353530 Dec 12 18:15:30.880000 audit: BPF prog-id=152 op=UNLOAD Dec 12 18:15:30.880000 audit[3615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:30.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039656131306531323862346164313236653338663138303464353530 Dec 12 18:15:30.880000 audit: BPF prog-id=153 op=LOAD Dec 12 18:15:30.880000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3604 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:30.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039656131306531323862346164313236653338663138303464353530 Dec 12 18:15:30.880000 audit: BPF prog-id=154 op=LOAD Dec 12 18:15:30.880000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3604 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:30.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039656131306531323862346164313236653338663138303464353530 Dec 12 18:15:30.880000 audit: BPF prog-id=154 op=UNLOAD Dec 12 18:15:30.880000 audit[3615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:30.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039656131306531323862346164313236653338663138303464353530 Dec 12 18:15:30.880000 audit: BPF prog-id=153 op=UNLOAD Dec 12 18:15:30.880000 audit[3615]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:30.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039656131306531323862346164313236653338663138303464353530 Dec 12 18:15:30.880000 audit: BPF prog-id=155 op=LOAD Dec 12 18:15:30.880000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3604 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:30.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039656131306531323862346164313236653338663138303464353530 Dec 12 18:15:30.892961 kubelet[3114]: E1212 18:15:30.892909 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.892961 kubelet[3114]: W1212 18:15:30.892944 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.892961 kubelet[3114]: E1212 18:15:30.892971 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.893306 kubelet[3114]: E1212 18:15:30.893269 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.893306 kubelet[3114]: W1212 18:15:30.893299 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.893351 kubelet[3114]: E1212 18:15:30.893313 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.893602 kubelet[3114]: E1212 18:15:30.893552 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.893602 kubelet[3114]: W1212 18:15:30.893566 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.893602 kubelet[3114]: E1212 18:15:30.893575 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:30.893873 kubelet[3114]: E1212 18:15:30.893860 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.893873 kubelet[3114]: W1212 18:15:30.893871 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.893915 kubelet[3114]: E1212 18:15:30.893881 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.894212 kubelet[3114]: E1212 18:15:30.894159 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.894212 kubelet[3114]: W1212 18:15:30.894185 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.894212 kubelet[3114]: E1212 18:15:30.894207 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.894418 kubelet[3114]: E1212 18:15:30.894404 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.894418 kubelet[3114]: W1212 18:15:30.894414 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.894466 kubelet[3114]: E1212 18:15:30.894422 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.894571 kubelet[3114]: E1212 18:15:30.894559 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.894571 kubelet[3114]: W1212 18:15:30.894568 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.894616 kubelet[3114]: E1212 18:15:30.894574 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.894727 kubelet[3114]: E1212 18:15:30.894715 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.894727 kubelet[3114]: W1212 18:15:30.894724 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.894773 kubelet[3114]: E1212 18:15:30.894730 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:30.894883 kubelet[3114]: E1212 18:15:30.894872 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.894883 kubelet[3114]: W1212 18:15:30.894880 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.894942 kubelet[3114]: E1212 18:15:30.894887 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.895016 kubelet[3114]: E1212 18:15:30.895004 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.895016 kubelet[3114]: W1212 18:15:30.895014 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.895074 kubelet[3114]: E1212 18:15:30.895020 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.895153 kubelet[3114]: E1212 18:15:30.895142 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.895153 kubelet[3114]: W1212 18:15:30.895151 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.895208 kubelet[3114]: E1212 18:15:30.895157 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.895285 kubelet[3114]: E1212 18:15:30.895274 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.895285 kubelet[3114]: W1212 18:15:30.895283 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.895333 kubelet[3114]: E1212 18:15:30.895289 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.895469 kubelet[3114]: E1212 18:15:30.895450 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.895469 kubelet[3114]: W1212 18:15:30.895460 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.895469 kubelet[3114]: E1212 18:15:30.895466 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:30.895609 kubelet[3114]: E1212 18:15:30.895598 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.895609 kubelet[3114]: W1212 18:15:30.895606 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.895651 kubelet[3114]: E1212 18:15:30.895612 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.895754 kubelet[3114]: E1212 18:15:30.895743 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.895754 kubelet[3114]: W1212 18:15:30.895751 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.895806 kubelet[3114]: E1212 18:15:30.895758 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.895930 kubelet[3114]: E1212 18:15:30.895919 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.895930 kubelet[3114]: W1212 18:15:30.895927 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.895976 kubelet[3114]: E1212 18:15:30.895933 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.896065 kubelet[3114]: E1212 18:15:30.896054 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.896065 kubelet[3114]: W1212 18:15:30.896063 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.896114 kubelet[3114]: E1212 18:15:30.896069 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.896223 kubelet[3114]: E1212 18:15:30.896211 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.896223 kubelet[3114]: W1212 18:15:30.896219 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.896267 kubelet[3114]: E1212 18:15:30.896225 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:30.896347 kubelet[3114]: E1212 18:15:30.896336 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.896347 kubelet[3114]: W1212 18:15:30.896344 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.896468 kubelet[3114]: E1212 18:15:30.896350 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.896505 kubelet[3114]: E1212 18:15:30.896493 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.896505 kubelet[3114]: W1212 18:15:30.896501 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.896546 kubelet[3114]: E1212 18:15:30.896508 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.907782 kubelet[3114]: E1212 18:15:30.907718 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.907782 kubelet[3114]: W1212 18:15:30.907742 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.907782 kubelet[3114]: E1212 18:15:30.907762 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.907782 kubelet[3114]: I1212 18:15:30.907787 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cc6cae7-6092-4840-b2c3-065b3bb220f3-kubelet-dir\") pod \"csi-node-driver-9klxk\" (UID: \"4cc6cae7-6092-4840-b2c3-065b3bb220f3\") " pod="calico-system/csi-node-driver-9klxk" Dec 12 18:15:30.907970 kubelet[3114]: E1212 18:15:30.907959 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.907970 kubelet[3114]: W1212 18:15:30.907966 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.908008 kubelet[3114]: E1212 18:15:30.907973 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:30.908008 kubelet[3114]: I1212 18:15:30.907992 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4cc6cae7-6092-4840-b2c3-065b3bb220f3-socket-dir\") pod \"csi-node-driver-9klxk\" (UID: \"4cc6cae7-6092-4840-b2c3-065b3bb220f3\") " pod="calico-system/csi-node-driver-9klxk" Dec 12 18:15:30.908773 kubelet[3114]: E1212 18:15:30.908758 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.908808 kubelet[3114]: W1212 18:15:30.908773 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.908808 kubelet[3114]: E1212 18:15:30.908785 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.908859 kubelet[3114]: I1212 18:15:30.908809 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsbhq\" (UniqueName: \"kubernetes.io/projected/4cc6cae7-6092-4840-b2c3-065b3bb220f3-kube-api-access-nsbhq\") pod \"csi-node-driver-9klxk\" (UID: \"4cc6cae7-6092-4840-b2c3-065b3bb220f3\") " pod="calico-system/csi-node-driver-9klxk" Dec 12 18:15:30.909013 kubelet[3114]: E1212 18:15:30.908999 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.909049 kubelet[3114]: W1212 18:15:30.909013 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.909049 kubelet[3114]: E1212 18:15:30.909024 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.909169 kubelet[3114]: E1212 18:15:30.909158 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.909169 kubelet[3114]: W1212 18:15:30.909166 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.909210 kubelet[3114]: E1212 18:15:30.909172 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.909317 kubelet[3114]: E1212 18:15:30.909308 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.909317 kubelet[3114]: W1212 18:15:30.909316 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.909360 kubelet[3114]: E1212 18:15:30.909322 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:30.909443 kubelet[3114]: E1212 18:15:30.909435 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.909443 kubelet[3114]: W1212 18:15:30.909442 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.909486 kubelet[3114]: E1212 18:15:30.909448 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.909486 kubelet[3114]: I1212 18:15:30.909470 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4cc6cae7-6092-4840-b2c3-065b3bb220f3-varrun\") pod \"csi-node-driver-9klxk\" (UID: \"4cc6cae7-6092-4840-b2c3-065b3bb220f3\") " pod="calico-system/csi-node-driver-9klxk" Dec 12 18:15:30.909794 kubelet[3114]: E1212 18:15:30.909767 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.909822 kubelet[3114]: W1212 18:15:30.909795 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.909822 kubelet[3114]: E1212 18:15:30.909817 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.909999 kubelet[3114]: E1212 18:15:30.909988 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.909999 kubelet[3114]: W1212 18:15:30.909997 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.910050 kubelet[3114]: E1212 18:15:30.910005 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.910182 kubelet[3114]: E1212 18:15:30.910170 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.910203 kubelet[3114]: W1212 18:15:30.910183 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.910203 kubelet[3114]: E1212 18:15:30.910190 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:30.910314 kubelet[3114]: E1212 18:15:30.910305 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.910314 kubelet[3114]: W1212 18:15:30.910313 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.910359 kubelet[3114]: E1212 18:15:30.910320 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.910479 kubelet[3114]: E1212 18:15:30.910466 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.910479 kubelet[3114]: W1212 18:15:30.910478 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.910531 kubelet[3114]: E1212 18:15:30.910484 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.910531 kubelet[3114]: I1212 18:15:30.910500 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4cc6cae7-6092-4840-b2c3-065b3bb220f3-registration-dir\") pod \"csi-node-driver-9klxk\" (UID: \"4cc6cae7-6092-4840-b2c3-065b3bb220f3\") " pod="calico-system/csi-node-driver-9klxk" Dec 12 18:15:30.910681 kubelet[3114]: E1212 18:15:30.910670 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.910708 kubelet[3114]: W1212 18:15:30.910681 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.910708 kubelet[3114]: E1212 18:15:30.910690 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.910842 kubelet[3114]: E1212 18:15:30.910833 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.910842 kubelet[3114]: W1212 18:15:30.910841 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.910881 kubelet[3114]: E1212 18:15:30.910848 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:30.910970 kubelet[3114]: E1212 18:15:30.910962 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:30.910970 kubelet[3114]: W1212 18:15:30.910969 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:30.911017 kubelet[3114]: E1212 18:15:30.910975 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:30.914666 containerd[1848]: time="2025-12-12T18:15:30.914628803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b6758dc95-g2wnn,Uid:3dec0c1d-e44a-48f3-9115-6634cfbee552,Namespace:calico-system,Attempt:0,} returns sandbox id \"09ea10e128b4ad126e38f1804d55059d36a3ea6aeb0d79087be584b3c63322ad\"" Dec 12 18:15:30.915875 containerd[1848]: time="2025-12-12T18:15:30.915858256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:15:30.981207 containerd[1848]: time="2025-12-12T18:15:30.980537016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7m4g,Uid:58314650-e3ef-4713-a345-70981e07a7e3,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:31.011971 kubelet[3114]: E1212 18:15:31.011934 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.011971 kubelet[3114]: W1212 18:15:31.011962 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.011971 kubelet[3114]: E1212 18:15:31.011985 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.012218 kubelet[3114]: E1212 18:15:31.012188 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.012218 kubelet[3114]: W1212 18:15:31.012195 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.012218 kubelet[3114]: E1212 18:15:31.012202 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.012481 kubelet[3114]: E1212 18:15:31.012466 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.012513 kubelet[3114]: W1212 18:15:31.012482 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.012513 kubelet[3114]: E1212 18:15:31.012494 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:31.012762 containerd[1848]: time="2025-12-12T18:15:31.012722862Z" level=info msg="connecting to shim 95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201" address="unix:///run/containerd/s/d1ec697bb8577309eb8a392996774d7fe675789b132343e0df0d311cfc34031f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:31.012832 kubelet[3114]: E1212 18:15:31.012806 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.012857 kubelet[3114]: W1212 18:15:31.012830 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.012857 kubelet[3114]: E1212 18:15:31.012849 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.013067 kubelet[3114]: E1212 18:15:31.013055 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.013067 kubelet[3114]: W1212 18:15:31.013064 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.013122 kubelet[3114]: E1212 18:15:31.013072 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.013255 kubelet[3114]: E1212 18:15:31.013244 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.013255 kubelet[3114]: W1212 18:15:31.013253 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.013299 kubelet[3114]: E1212 18:15:31.013260 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.013485 kubelet[3114]: E1212 18:15:31.013474 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.013485 kubelet[3114]: W1212 18:15:31.013483 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.013535 kubelet[3114]: E1212 18:15:31.013490 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:31.013647 kubelet[3114]: E1212 18:15:31.013635 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.013647 kubelet[3114]: W1212 18:15:31.013644 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.013729 kubelet[3114]: E1212 18:15:31.013650 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.013824 kubelet[3114]: E1212 18:15:31.013812 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.013824 kubelet[3114]: W1212 18:15:31.013820 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.013867 kubelet[3114]: E1212 18:15:31.013826 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.014010 kubelet[3114]: E1212 18:15:31.013997 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.014010 kubelet[3114]: W1212 18:15:31.014006 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.014058 kubelet[3114]: E1212 18:15:31.014012 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.014169 kubelet[3114]: E1212 18:15:31.014157 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.014169 kubelet[3114]: W1212 18:15:31.014166 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.014228 kubelet[3114]: E1212 18:15:31.014173 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.014366 kubelet[3114]: E1212 18:15:31.014355 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.014366 kubelet[3114]: W1212 18:15:31.014363 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.014414 kubelet[3114]: E1212 18:15:31.014370 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:31.014562 kubelet[3114]: E1212 18:15:31.014551 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.014562 kubelet[3114]: W1212 18:15:31.014560 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.014603 kubelet[3114]: E1212 18:15:31.014566 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.014820 kubelet[3114]: E1212 18:15:31.014804 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.014843 kubelet[3114]: W1212 18:15:31.014822 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.014843 kubelet[3114]: E1212 18:15:31.014837 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.015051 kubelet[3114]: E1212 18:15:31.015039 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.015072 kubelet[3114]: W1212 18:15:31.015051 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.015072 kubelet[3114]: E1212 18:15:31.015062 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.015218 kubelet[3114]: E1212 18:15:31.015209 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.015218 kubelet[3114]: W1212 18:15:31.015218 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.015268 kubelet[3114]: E1212 18:15:31.015224 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.015406 kubelet[3114]: E1212 18:15:31.015396 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.015406 kubelet[3114]: W1212 18:15:31.015405 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.015456 kubelet[3114]: E1212 18:15:31.015420 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:31.015584 kubelet[3114]: E1212 18:15:31.015575 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.015606 kubelet[3114]: W1212 18:15:31.015586 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.015606 kubelet[3114]: E1212 18:15:31.015593 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.015751 kubelet[3114]: E1212 18:15:31.015742 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.015751 kubelet[3114]: W1212 18:15:31.015751 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.015799 kubelet[3114]: E1212 18:15:31.015758 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.015918 kubelet[3114]: E1212 18:15:31.015910 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.015940 kubelet[3114]: W1212 18:15:31.015918 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.015940 kubelet[3114]: E1212 18:15:31.015924 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.016141 kubelet[3114]: E1212 18:15:31.016131 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.016141 kubelet[3114]: W1212 18:15:31.016140 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.016184 kubelet[3114]: E1212 18:15:31.016147 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.016316 kubelet[3114]: E1212 18:15:31.016307 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.016341 kubelet[3114]: W1212 18:15:31.016315 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.016341 kubelet[3114]: E1212 18:15:31.016322 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:31.016514 kubelet[3114]: E1212 18:15:31.016505 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.016514 kubelet[3114]: W1212 18:15:31.016513 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.016558 kubelet[3114]: E1212 18:15:31.016520 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.017173 kubelet[3114]: E1212 18:15:31.016945 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.017173 kubelet[3114]: W1212 18:15:31.016959 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.017173 kubelet[3114]: E1212 18:15:31.016971 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.017338 kubelet[3114]: E1212 18:15:31.017329 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.017400 kubelet[3114]: W1212 18:15:31.017393 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.017436 kubelet[3114]: E1212 18:15:31.017430 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.026005 kubelet[3114]: E1212 18:15:31.025972 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:31.026163 kubelet[3114]: W1212 18:15:31.026152 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:31.026234 kubelet[3114]: E1212 18:15:31.026224 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:31.050940 systemd[1]: Started cri-containerd-95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201.scope - libcontainer container 95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201. 
Dec 12 18:15:31.060000 audit: BPF prog-id=156 op=LOAD Dec 12 18:15:31.060000 audit: BPF prog-id=157 op=LOAD Dec 12 18:15:31.060000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3695 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935633963383734333132653939656163316161313136326662373662 Dec 12 18:15:31.060000 audit: BPF prog-id=157 op=UNLOAD Dec 12 18:15:31.060000 audit[3731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935633963383734333132653939656163316161313136326662373662 Dec 12 18:15:31.060000 audit: BPF prog-id=158 op=LOAD Dec 12 18:15:31.060000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3695 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935633963383734333132653939656163316161313136326662373662 Dec 12 18:15:31.060000 audit: BPF prog-id=159 op=LOAD Dec 12 18:15:31.060000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3695 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935633963383734333132653939656163316161313136326662373662 Dec 12 18:15:31.060000 audit: BPF prog-id=159 op=UNLOAD Dec 12 18:15:31.060000 audit[3731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935633963383734333132653939656163316161313136326662373662 Dec 12 18:15:31.060000 audit: BPF prog-id=158 op=UNLOAD Dec 12 18:15:31.060000 audit[3731]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935633963383734333132653939656163316161313136326662373662 Dec 12 18:15:31.060000 audit: BPF prog-id=160 op=LOAD Dec 12 18:15:31.060000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3695 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935633963383734333132653939656163316161313136326662373662 Dec 12 18:15:31.078174 containerd[1848]: time="2025-12-12T18:15:31.078107641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7m4g,Uid:58314650-e3ef-4713-a345-70981e07a7e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201\"" Dec 12 18:15:31.482000 audit[3759]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:31.482000 audit[3759]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff5062a820 a2=0 a3=7fff5062a80c items=0 ppid=3256 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:31.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:31.495000 audit[3759]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:31.495000 audit[3759]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5062a820 a2=0 a3=0 items=0 ppid=3256 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:31.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:33.265436 kubelet[3114]: E1212 18:15:33.265235 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:15:33.921701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050935222.mount: Deactivated successfully. 
Dec 12 18:15:34.268065 containerd[1848]: time="2025-12-12T18:15:34.267939145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:34.269618 containerd[1848]: time="2025-12-12T18:15:34.269570574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Dec 12 18:15:34.271538 containerd[1848]: time="2025-12-12T18:15:34.271457258Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:34.274348 containerd[1848]: time="2025-12-12T18:15:34.274278963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:34.274935 containerd[1848]: time="2025-12-12T18:15:34.274892382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.359005089s" Dec 12 18:15:34.274935 containerd[1848]: time="2025-12-12T18:15:34.274924767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:15:34.275627 containerd[1848]: time="2025-12-12T18:15:34.275605757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:15:34.286036 containerd[1848]: time="2025-12-12T18:15:34.285963142Z" level=info msg="CreateContainer within sandbox \"09ea10e128b4ad126e38f1804d55059d36a3ea6aeb0d79087be584b3c63322ad\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:15:34.297571 containerd[1848]: time="2025-12-12T18:15:34.297509546Z" level=info msg="Container 53da3dc2a9cb3ff2b5de0e487c2dbc4d65a4be900484d733edda315e24fab6d6: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:34.306934 containerd[1848]: time="2025-12-12T18:15:34.306873266Z" level=info msg="CreateContainer within sandbox \"09ea10e128b4ad126e38f1804d55059d36a3ea6aeb0d79087be584b3c63322ad\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"53da3dc2a9cb3ff2b5de0e487c2dbc4d65a4be900484d733edda315e24fab6d6\"" Dec 12 18:15:34.307383 containerd[1848]: time="2025-12-12T18:15:34.307352381Z" level=info msg="StartContainer for \"53da3dc2a9cb3ff2b5de0e487c2dbc4d65a4be900484d733edda315e24fab6d6\"" Dec 12 18:15:34.308321 containerd[1848]: time="2025-12-12T18:15:34.308284918Z" level=info msg="connecting to shim 53da3dc2a9cb3ff2b5de0e487c2dbc4d65a4be900484d733edda315e24fab6d6" address="unix:///run/containerd/s/9259eb65ec819e8af6a309236abcfa8ff355508658ca3202b90838706d4ecafb" protocol=ttrpc version=3 Dec 12 18:15:34.333946 systemd[1]: Started cri-containerd-53da3dc2a9cb3ff2b5de0e487c2dbc4d65a4be900484d733edda315e24fab6d6.scope - libcontainer container 53da3dc2a9cb3ff2b5de0e487c2dbc4d65a4be900484d733edda315e24fab6d6. 
Dec 12 18:15:34.343000 audit: BPF prog-id=161 op=LOAD Dec 12 18:15:34.346095 kernel: kauditd_printk_skb: 73 callbacks suppressed Dec 12 18:15:34.346171 kernel: audit: type=1334 audit(1765563334.343:551): prog-id=161 op=LOAD Dec 12 18:15:34.344000 audit: BPF prog-id=162 op=LOAD Dec 12 18:15:34.348961 kernel: audit: type=1334 audit(1765563334.344:552): prog-id=162 op=LOAD Dec 12 18:15:34.344000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.351655 kernel: audit: type=1300 audit(1765563334.344:552): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.356692 kernel: audit: type=1327 audit(1765563334.344:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.344000 audit: BPF prog-id=162 op=UNLOAD Dec 12 18:15:34.360260 kernel: audit: type=1334 audit(1765563334.344:553): prog-id=162 op=UNLOAD Dec 12 18:15:34.344000 audit[3770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.363113 kernel: audit: type=1300 audit(1765563334.344:553): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.367705 kernel: audit: type=1327 audit(1765563334.344:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.344000 audit: BPF prog-id=163 op=LOAD Dec 12 18:15:34.371049 kernel: audit: type=1334 audit(1765563334.344:554): prog-id=163 op=LOAD Dec 12 18:15:34.344000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.373609 kernel: audit: type=1300 audit(1765563334.344:554): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.378301 kernel: audit: type=1327 audit(1765563334.344:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.344000 audit: BPF prog-id=164 op=LOAD Dec 12 18:15:34.344000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.344000 audit: BPF prog-id=164 op=UNLOAD Dec 12 18:15:34.344000 audit[3770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.344000 audit: BPF prog-id=163 op=UNLOAD Dec 12 18:15:34.344000 audit[3770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.344000 audit: BPF prog-id=165 op=LOAD Dec 12 18:15:34.344000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3604 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:34.344000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646133646332613963623366663262356465306534383763326462 Dec 12 18:15:34.386781 containerd[1848]: time="2025-12-12T18:15:34.386744469Z" level=info msg="StartContainer for \"53da3dc2a9cb3ff2b5de0e487c2dbc4d65a4be900484d733edda315e24fab6d6\" returns successfully" Dec 12 18:15:35.265452 kubelet[3114]: E1212 18:15:35.265364 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:15:35.338467 kubelet[3114]: I1212 18:15:35.338392 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b6758dc95-g2wnn" podStartSLOduration=1.978442308 podStartE2EDuration="5.338373747s" podCreationTimestamp="2025-12-12 18:15:30 +0000 UTC" firstStartedPulling="2025-12-12 18:15:30.915565415 +0000 UTC m=+24.728088542" lastFinishedPulling="2025-12-12 18:15:34.275496867 +0000 UTC m=+28.088019981" observedRunningTime="2025-12-12 18:15:35.337636829 +0000 UTC m=+29.150159949" watchObservedRunningTime="2025-12-12 18:15:35.338373747 +0000 UTC m=+29.150896888" Dec 12 18:15:35.360000 audit[3817]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:35.360000 audit[3817]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc21242f10 a2=0 a3=7ffc21242efc items=0 ppid=3256 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:35.360000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:35.366000 audit[3817]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:35.366000 audit[3817]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc21242f10 a2=0 a3=7ffc21242efc items=0 ppid=3256 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:35.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:35.423292 kubelet[3114]: E1212 18:15:35.423222 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:35.423292 kubelet[3114]: W1212 18:15:35.423249 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:35.423292 kubelet[3114]: E1212 18:15:35.423270 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:35.423475 kubelet[3114]: E1212 18:15:35.423434 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:35.423475 kubelet[3114]: W1212 18:15:35.423441 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:35.423475 kubelet[3114]: E1212 18:15:35.423447 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:35.423619 kubelet[3114]: E1212 18:15:35.423607 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:35.423619 kubelet[3114]: W1212 18:15:35.423614 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:35.423680 kubelet[3114]: E1212 18:15:35.423620 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:35.423774 kubelet[3114]: E1212 18:15:35.423763 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:35.423774 kubelet[3114]: W1212 18:15:35.423771 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:35.423819 kubelet[3114]: E1212 18:15:35.423777 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:35.423911 kubelet[3114]: E1212 18:15:35.423900 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:35.423911 kubelet[3114]: W1212 18:15:35.423908 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:35.423958 kubelet[3114]: E1212 18:15:35.423914 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:35.424031 kubelet[3114]: E1212 18:15:35.424021 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:35.424031 kubelet[3114]: W1212 18:15:35.424028 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:35.424077 kubelet[3114]: E1212 18:15:35.424033 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:15:35.449074 kubelet[3114]: E1212 18:15:35.449033 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:35.449074 kubelet[3114]: W1212 18:15:35.449054 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:35.449074 kubelet[3114]: E1212 18:15:35.449068 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:35.449308 kubelet[3114]: E1212 18:15:35.449290 3114 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:15:35.449308 kubelet[3114]: W1212 18:15:35.449301 3114 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:15:35.449356 kubelet[3114]: E1212 18:15:35.449311 3114 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:15:35.844891 containerd[1848]: time="2025-12-12T18:15:35.844833654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:35.846378 containerd[1848]: time="2025-12-12T18:15:35.846333355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:35.848340 containerd[1848]: time="2025-12-12T18:15:35.848295412Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:35.852966 containerd[1848]: time="2025-12-12T18:15:35.852905468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:35.853550 containerd[1848]: time="2025-12-12T18:15:35.853208814Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.577579041s" Dec 12 18:15:35.853550 containerd[1848]: time="2025-12-12T18:15:35.853238850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:15:35.858740 containerd[1848]: time="2025-12-12T18:15:35.858692791Z" level=info msg="CreateContainer within sandbox \"95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:15:35.869933 containerd[1848]: time="2025-12-12T18:15:35.869880463Z" level=info msg="Container dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1: CDI 
devices from CRI Config.CDIDevices: []" Dec 12 18:15:35.882105 containerd[1848]: time="2025-12-12T18:15:35.882029074Z" level=info msg="CreateContainer within sandbox \"95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1\"" Dec 12 18:15:35.882736 containerd[1848]: time="2025-12-12T18:15:35.882574465Z" level=info msg="StartContainer for \"dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1\"" Dec 12 18:15:35.885676 containerd[1848]: time="2025-12-12T18:15:35.884210498Z" level=info msg="connecting to shim dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1" address="unix:///run/containerd/s/d1ec697bb8577309eb8a392996774d7fe675789b132343e0df0d311cfc34031f" protocol=ttrpc version=3 Dec 12 18:15:35.927949 systemd[1]: Started cri-containerd-dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1.scope - libcontainer container dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1. Dec 12 18:15:35.988000 audit: BPF prog-id=166 op=LOAD Dec 12 18:15:35.988000 audit[3855]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3695 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:35.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462646631333134653234643461396663346364646661616333353563 Dec 12 18:15:35.988000 audit: BPF prog-id=167 op=LOAD Dec 12 18:15:35.988000 audit[3855]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3695 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:35.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462646631333134653234643461396663346364646661616333353563 Dec 12 18:15:35.988000 audit: BPF prog-id=167 op=UNLOAD Dec 12 18:15:35.988000 audit[3855]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:35.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462646631333134653234643461396663346364646661616333353563 Dec 12 18:15:35.988000 audit: BPF prog-id=166 op=UNLOAD Dec 12 18:15:35.988000 audit[3855]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:35.988000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462646631333134653234643461396663346364646661616333353563 Dec 12 18:15:35.988000 audit: BPF prog-id=168 op=LOAD Dec 12 18:15:35.988000 audit[3855]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3695 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:35.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462646631333134653234643461396663346364646661616333353563 Dec 12 18:15:36.009367 containerd[1848]: time="2025-12-12T18:15:36.009237485Z" level=info msg="StartContainer for \"dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1\" returns successfully" Dec 12 18:15:36.016997 systemd[1]: cri-containerd-dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1.scope: Deactivated successfully. Dec 12 18:15:36.019747 containerd[1848]: time="2025-12-12T18:15:36.019683502Z" level=info msg="received container exit event container_id:\"dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1\" id:\"dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1\" pid:3869 exited_at:{seconds:1765563336 nanos:19321796}" Dec 12 18:15:36.024000 audit: BPF prog-id=168 op=UNLOAD Dec 12 18:15:36.040700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbdf1314e24d4a9fc4cddfaac355ce8592d17399b8755d24779057d55c48caf1-rootfs.mount: Deactivated successfully. 
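The repeated driver-call.go/plugins.go messages above are the kubelet's FlexVolume prober at work: it execs each driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument `init` and expects a JSON status object on stdout. While the nodeagent~uds/uds binary is still missing, the call produces no output and the JSON decode fails with "unexpected end of JSON input"; the flexvol-driver container that just exited (Calico's pod2daemon-flexvol image) is there to install that driver. Below is a minimal sketch of the call pattern, not kubelet source; `driverStatus` and `probeDriver` are illustrative names.

```go
// Minimal sketch (not kubelet code) of the FlexVolume "init" probe that
// produces the driver-call.go errors above. driverStatus and probeDriver
// are illustrative names; the JSON fields follow the FlexVolume convention.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type driverStatus struct {
	Status       string `json:"status"`
	Message      string `json:"message,omitempty"`
	Capabilities struct {
		Attach bool `json:"attach"`
	} `json:"capabilities"`
}

func probeDriver(path string) (*driverStatus, error) {
	// The kubelet execs the driver binary with the single argument "init".
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// While the binary is absent the exec fails and out stays empty;
		// the kubelet records this condition as the W "driver call failed" lines.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		// json.Unmarshal of empty output returns
		// "unexpected end of JSON input", as in the E entries above.
		return nil, fmt.Errorf("failed to unmarshal output for command: init: %w", jerr)
	}
	return &st, nil
}

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	if st, err := probeDriver(driver); err != nil {
		fmt.Println(err)
	} else {
		fmt.Printf("driver initialised: %+v\n", st)
	}
}
```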
Dec 12 18:15:36.331512 containerd[1848]: time="2025-12-12T18:15:36.331352383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:15:37.265533 kubelet[3114]: E1212 18:15:37.265482 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:15:39.265354 kubelet[3114]: E1212 18:15:39.265286 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:15:39.845817 containerd[1848]: time="2025-12-12T18:15:39.845769765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:39.847316 containerd[1848]: time="2025-12-12T18:15:39.847258924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 12 18:15:39.849634 containerd[1848]: time="2025-12-12T18:15:39.849534352Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:39.853070 containerd[1848]: time="2025-12-12T18:15:39.853016559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:39.853712 containerd[1848]: time="2025-12-12T18:15:39.853618301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.522229505s" Dec 12 18:15:39.853712 containerd[1848]: time="2025-12-12T18:15:39.853654692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:15:39.858140 containerd[1848]: time="2025-12-12T18:15:39.858095400Z" level=info msg="CreateContainer within sandbox \"95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:15:39.869882 containerd[1848]: time="2025-12-12T18:15:39.869837260Z" level=info msg="Container 19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:39.881892 containerd[1848]: time="2025-12-12T18:15:39.881835726Z" level=info msg="CreateContainer within sandbox \"95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554\"" Dec 12 18:15:39.882426 containerd[1848]: time="2025-12-12T18:15:39.882391173Z" level=info msg="StartContainer for \"19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554\"" Dec 12 18:15:39.883642 
containerd[1848]: time="2025-12-12T18:15:39.883571121Z" level=info msg="connecting to shim 19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554" address="unix:///run/containerd/s/d1ec697bb8577309eb8a392996774d7fe675789b132343e0df0d311cfc34031f" protocol=ttrpc version=3 Dec 12 18:15:39.907993 systemd[1]: Started cri-containerd-19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554.scope - libcontainer container 19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554. Dec 12 18:15:39.983000 audit: BPF prog-id=169 op=LOAD Dec 12 18:15:39.986097 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 12 18:15:39.986180 kernel: audit: type=1334 audit(1765563339.983:567): prog-id=169 op=LOAD Dec 12 18:15:39.983000 audit[3921]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3695 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:39.989537 kernel: audit: type=1300 audit(1765563339.983:567): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3695 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:39.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139343537383030613936346530383230356664303735663231303439 Dec 12 18:15:39.994343 kernel: audit: type=1327 audit(1765563339.983:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139343537383030613936346530383230356664303735663231303439 Dec 12 18:15:39.983000 audit: BPF prog-id=170 op=LOAD Dec 12 18:15:39.999680 kernel: audit: type=1334 audit(1765563339.983:568): prog-id=170 op=LOAD Dec 12 18:15:39.999752 kernel: audit: type=1300 audit(1765563339.983:568): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3695 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:39.983000 audit[3921]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3695 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:39.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139343537383030613936346530383230356664303735663231303439 Dec 12 18:15:40.005324 kernel: audit: type=1327 audit(1765563339.983:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139343537383030613936346530383230356664303735663231303439 Dec 12 18:15:39.983000 
audit: BPF prog-id=170 op=UNLOAD Dec 12 18:15:40.008754 kernel: audit: type=1334 audit(1765563339.983:569): prog-id=170 op=UNLOAD Dec 12 18:15:39.983000 audit[3921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:40.011525 kernel: audit: type=1300 audit(1765563339.983:569): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:39.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139343537383030613936346530383230356664303735663231303439 Dec 12 18:15:40.016422 kernel: audit: type=1327 audit(1765563339.983:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139343537383030613936346530383230356664303735663231303439 Dec 12 18:15:40.017608 containerd[1848]: time="2025-12-12T18:15:40.017560323Z" level=info msg="StartContainer for \"19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554\" returns successfully" Dec 12 18:15:39.983000 audit: BPF prog-id=169 op=UNLOAD Dec 12 18:15:40.019696 kernel: audit: type=1334 audit(1765563339.983:570): prog-id=169 op=UNLOAD Dec 12 18:15:39.983000 audit[3921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:39.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139343537383030613936346530383230356664303735663231303439 Dec 12 18:15:39.983000 audit: BPF prog-id=171 op=LOAD Dec 12 18:15:39.983000 audit[3921]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3695 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:39.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139343537383030613936346530383230356664303735663231303439 Dec 12 18:15:40.416150 systemd[1]: cri-containerd-19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554.scope: Deactivated successfully. Dec 12 18:15:40.416458 systemd[1]: cri-containerd-19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554.scope: Consumed 528ms CPU time, 192.6M memory peak, 171.3M written to disk. 
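The audit PROCTITLE records interleaved above carry runc's command line hex-encoded, with NUL bytes separating the arguments. A short decoder (my own sketch, not part of auditd) makes them readable; the constant below is the prefix of one of the proctitle= values logged above.

```go
// Decode an audit PROCTITLE value: the field is the process command line
// hex-encoded, with NUL bytes between arguments. decodeProctitle is an
// illustrative helper name, not an auditd API.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// Arguments are separated by NUL bytes; join them with spaces.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Prefix of one of the proctitle= values logged above.
	const p = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	s, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // runc --root /run/containerd/runc/k8s.io
}
```

The longer values above additionally decode to `--log /run/containerd/io.containerd.runtime.v2.task/k8s.io/` followed by the container ID, which the audit record itself truncates.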
Dec 12 18:15:40.417943 containerd[1848]: time="2025-12-12T18:15:40.417908782Z" level=info msg="received container exit event container_id:\"19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554\" id:\"19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554\" pid:3934 exited_at:{seconds:1765563340 nanos:417648286}" Dec 12 18:15:40.423000 audit: BPF prog-id=171 op=UNLOAD Dec 12 18:15:40.433865 kubelet[3114]: I1212 18:15:40.433826 3114 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:15:40.438379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19457800a964e08205fd075f210491419dc1f8b88ff721750e0792f7a957d554-rootfs.mount: Deactivated successfully. Dec 12 18:15:40.504916 systemd[1]: Created slice kubepods-burstable-podfd160278_5916_4c1e_b8d8_6da7c31295f3.slice - libcontainer container kubepods-burstable-podfd160278_5916_4c1e_b8d8_6da7c31295f3.slice. Dec 12 18:15:40.510696 systemd[1]: Created slice kubepods-besteffort-podd5affb4a_a5c2_4140_9517_3b93721ff225.slice - libcontainer container kubepods-besteffort-podd5affb4a_a5c2_4140_9517_3b93721ff225.slice. Dec 12 18:15:40.515740 systemd[1]: Created slice kubepods-besteffort-podfffa851a_7d3a_4af7_80b6_6f040212a19b.slice - libcontainer container kubepods-besteffort-podfffa851a_7d3a_4af7_80b6_6f040212a19b.slice. Dec 12 18:15:40.521818 systemd[1]: Created slice kubepods-burstable-pod1fa5f3cd_fa74_4d8b_91cb_263e77629d8c.slice - libcontainer container kubepods-burstable-pod1fa5f3cd_fa74_4d8b_91cb_263e77629d8c.slice. Dec 12 18:15:40.527552 systemd[1]: Created slice kubepods-besteffort-pode457a45d_7eaa_42e2_95fa_b7011451de77.slice - libcontainer container kubepods-besteffort-pode457a45d_7eaa_42e2_95fa_b7011451de77.slice. Dec 12 18:15:40.533558 systemd[1]: Created slice kubepods-besteffort-poda83fad8b_d566_4d95_b74e_3a16ee22e614.slice - libcontainer container kubepods-besteffort-poda83fad8b_d566_4d95_b74e_3a16ee22e614.slice. Dec 12 18:15:40.543040 systemd[1]: Created slice kubepods-besteffort-pod18dc805b_80db_4bdb_aaad_98fdbaf9a934.slice - libcontainer container kubepods-besteffort-pod18dc805b_80db_4bdb_aaad_98fdbaf9a934.slice. 
Dec 12 18:15:40.580840 kubelet[3114]: I1212 18:15:40.580778 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83fad8b-d566-4d95-b74e-3a16ee22e614-config\") pod \"goldmane-666569f655-4vrl2\" (UID: \"a83fad8b-d566-4d95-b74e-3a16ee22e614\") " pod="calico-system/goldmane-666569f655-4vrl2" Dec 12 18:15:40.580840 kubelet[3114]: I1212 18:15:40.580826 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a83fad8b-d566-4d95-b74e-3a16ee22e614-goldmane-ca-bundle\") pod \"goldmane-666569f655-4vrl2\" (UID: \"a83fad8b-d566-4d95-b74e-3a16ee22e614\") " pod="calico-system/goldmane-666569f655-4vrl2" Dec 12 18:15:40.580840 kubelet[3114]: I1212 18:15:40.580846 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxhcd\" (UniqueName: \"kubernetes.io/projected/a83fad8b-d566-4d95-b74e-3a16ee22e614-kube-api-access-wxhcd\") pod \"goldmane-666569f655-4vrl2\" (UID: \"a83fad8b-d566-4d95-b74e-3a16ee22e614\") " pod="calico-system/goldmane-666569f655-4vrl2" Dec 12 18:15:40.581089 kubelet[3114]: I1212 18:15:40.580867 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a83fad8b-d566-4d95-b74e-3a16ee22e614-goldmane-key-pair\") pod \"goldmane-666569f655-4vrl2\" (UID: \"a83fad8b-d566-4d95-b74e-3a16ee22e614\") " pod="calico-system/goldmane-666569f655-4vrl2" Dec 12 18:15:40.581089 kubelet[3114]: I1212 18:15:40.580913 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxfb4\" (UniqueName: \"kubernetes.io/projected/fffa851a-7d3a-4af7-80b6-6f040212a19b-kube-api-access-cxfb4\") pod \"calico-apiserver-646c8584fc-mwbp6\" (UID: \"fffa851a-7d3a-4af7-80b6-6f040212a19b\") " pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" Dec 12 18:15:40.581089 kubelet[3114]: I1212 18:15:40.580951 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18dc805b-80db-4bdb-aaad-98fdbaf9a934-whisker-ca-bundle\") pod \"whisker-58d485f596-jjsv8\" (UID: \"18dc805b-80db-4bdb-aaad-98fdbaf9a934\") " pod="calico-system/whisker-58d485f596-jjsv8" Dec 12 18:15:40.581089 kubelet[3114]: I1212 18:15:40.580973 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf2b2\" (UniqueName: \"kubernetes.io/projected/fd160278-5916-4c1e-b8d8-6da7c31295f3-kube-api-access-zf2b2\") pod \"coredns-674b8bbfcf-zjmxw\" (UID: \"fd160278-5916-4c1e-b8d8-6da7c31295f3\") " pod="kube-system/coredns-674b8bbfcf-zjmxw" Dec 12 18:15:40.581089 kubelet[3114]: I1212 18:15:40.581003 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd160278-5916-4c1e-b8d8-6da7c31295f3-config-volume\") pod \"coredns-674b8bbfcf-zjmxw\" (UID: \"fd160278-5916-4c1e-b8d8-6da7c31295f3\") " pod="kube-system/coredns-674b8bbfcf-zjmxw" Dec 12 18:15:40.581198 kubelet[3114]: I1212 18:15:40.581031 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/d5affb4a-a5c2-4140-9517-3b93721ff225-calico-apiserver-certs\") pod \"calico-apiserver-646c8584fc-2p5wd\" (UID: \"d5affb4a-a5c2-4140-9517-3b93721ff225\") " pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" Dec 12 18:15:40.581198 kubelet[3114]: I1212 18:15:40.581050 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fa5f3cd-fa74-4d8b-91cb-263e77629d8c-config-volume\") pod \"coredns-674b8bbfcf-pt4l8\" (UID: \"1fa5f3cd-fa74-4d8b-91cb-263e77629d8c\") " pod="kube-system/coredns-674b8bbfcf-pt4l8" Dec 12 18:15:40.581198 kubelet[3114]: I1212 18:15:40.581153 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18dc805b-80db-4bdb-aaad-98fdbaf9a934-whisker-backend-key-pair\") pod \"whisker-58d485f596-jjsv8\" (UID: \"18dc805b-80db-4bdb-aaad-98fdbaf9a934\") " pod="calico-system/whisker-58d485f596-jjsv8" Dec 12 18:15:40.581264 kubelet[3114]: I1212 18:15:40.581202 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r66d4\" (UniqueName: \"kubernetes.io/projected/18dc805b-80db-4bdb-aaad-98fdbaf9a934-kube-api-access-r66d4\") pod \"whisker-58d485f596-jjsv8\" (UID: \"18dc805b-80db-4bdb-aaad-98fdbaf9a934\") " pod="calico-system/whisker-58d485f596-jjsv8" Dec 12 18:15:40.581264 kubelet[3114]: I1212 18:15:40.581219 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j48kd\" (UniqueName: \"kubernetes.io/projected/1fa5f3cd-fa74-4d8b-91cb-263e77629d8c-kube-api-access-j48kd\") pod \"coredns-674b8bbfcf-pt4l8\" (UID: \"1fa5f3cd-fa74-4d8b-91cb-263e77629d8c\") " pod="kube-system/coredns-674b8bbfcf-pt4l8" Dec 12 18:15:40.581264 kubelet[3114]: I1212 18:15:40.581244 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e457a45d-7eaa-42e2-95fa-b7011451de77-tigera-ca-bundle\") pod \"calico-kube-controllers-5447cc8774-bt5vn\" (UID: \"e457a45d-7eaa-42e2-95fa-b7011451de77\") " pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" Dec 12 18:15:40.581330 kubelet[3114]: I1212 18:15:40.581267 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2cfp\" (UniqueName: \"kubernetes.io/projected/e457a45d-7eaa-42e2-95fa-b7011451de77-kube-api-access-p2cfp\") pod \"calico-kube-controllers-5447cc8774-bt5vn\" (UID: \"e457a45d-7eaa-42e2-95fa-b7011451de77\") " pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" Dec 12 18:15:40.581330 kubelet[3114]: I1212 18:15:40.581288 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fffa851a-7d3a-4af7-80b6-6f040212a19b-calico-apiserver-certs\") pod \"calico-apiserver-646c8584fc-mwbp6\" (UID: \"fffa851a-7d3a-4af7-80b6-6f040212a19b\") " pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" Dec 12 18:15:40.581330 kubelet[3114]: I1212 18:15:40.581305 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hc7\" (UniqueName: \"kubernetes.io/projected/d5affb4a-a5c2-4140-9517-3b93721ff225-kube-api-access-b8hc7\") pod \"calico-apiserver-646c8584fc-2p5wd\" (UID: 
\"d5affb4a-a5c2-4140-9517-3b93721ff225\") " pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" Dec 12 18:15:40.807462 containerd[1848]: time="2025-12-12T18:15:40.807315069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zjmxw,Uid:fd160278-5916-4c1e-b8d8-6da7c31295f3,Namespace:kube-system,Attempt:0,}" Dec 12 18:15:40.814089 containerd[1848]: time="2025-12-12T18:15:40.814053780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c8584fc-2p5wd,Uid:d5affb4a-a5c2-4140-9517-3b93721ff225,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:15:40.817675 containerd[1848]: time="2025-12-12T18:15:40.817617612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c8584fc-mwbp6,Uid:fffa851a-7d3a-4af7-80b6-6f040212a19b,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:15:40.824805 containerd[1848]: time="2025-12-12T18:15:40.824762464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pt4l8,Uid:1fa5f3cd-fa74-4d8b-91cb-263e77629d8c,Namespace:kube-system,Attempt:0,}" Dec 12 18:15:40.830945 containerd[1848]: time="2025-12-12T18:15:40.830880003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5447cc8774-bt5vn,Uid:e457a45d-7eaa-42e2-95fa-b7011451de77,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:40.837685 containerd[1848]: time="2025-12-12T18:15:40.837299417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4vrl2,Uid:a83fad8b-d566-4d95-b74e-3a16ee22e614,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:40.846444 containerd[1848]: time="2025-12-12T18:15:40.846393244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d485f596-jjsv8,Uid:18dc805b-80db-4bdb-aaad-98fdbaf9a934,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:40.866738 containerd[1848]: time="2025-12-12T18:15:40.866684632Z" level=error msg="Failed to destroy network for sandbox \"f1559f09e8856f61b84e9e5397b3bca6f08135adbc4325a6245952e3211572dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.874511 containerd[1848]: time="2025-12-12T18:15:40.874456815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zjmxw,Uid:fd160278-5916-4c1e-b8d8-6da7c31295f3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1559f09e8856f61b84e9e5397b3bca6f08135adbc4325a6245952e3211572dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.874754 kubelet[3114]: E1212 18:15:40.874718 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1559f09e8856f61b84e9e5397b3bca6f08135adbc4325a6245952e3211572dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.874814 kubelet[3114]: E1212 18:15:40.874791 3114 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1559f09e8856f61b84e9e5397b3bca6f08135adbc4325a6245952e3211572dc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zjmxw" Dec 12 18:15:40.874841 kubelet[3114]: E1212 18:15:40.874814 3114 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1559f09e8856f61b84e9e5397b3bca6f08135adbc4325a6245952e3211572dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zjmxw" Dec 12 18:15:40.874899 kubelet[3114]: E1212 18:15:40.874876 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zjmxw_kube-system(fd160278-5916-4c1e-b8d8-6da7c31295f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zjmxw_kube-system(fd160278-5916-4c1e-b8d8-6da7c31295f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1559f09e8856f61b84e9e5397b3bca6f08135adbc4325a6245952e3211572dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zjmxw" podUID="fd160278-5916-4c1e-b8d8-6da7c31295f3" Dec 12 18:15:40.882267 containerd[1848]: time="2025-12-12T18:15:40.882217404Z" level=error msg="Failed to destroy network for sandbox \"e4d4ae7f6b38ab2493caa8a1952194f8d3c30d390edd63f0baaa355f7431ac28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.884315 systemd[1]: run-netns-cni\x2d3c343865\x2d734a\x2d057c\x2d73e0\x2df654c12cbe8a.mount: Deactivated successfully. Dec 12 18:15:40.887461 containerd[1848]: time="2025-12-12T18:15:40.887425113Z" level=error msg="Failed to destroy network for sandbox \"debf7c8997ba5b0a3e3a85451d940b3dfb53c90c64a4f6489ded0265f7794b03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.889723 systemd[1]: run-netns-cni\x2de45b82ad\x2d8ac7\x2d116f\x2d3b42\x2d51792c228896.mount: Deactivated successfully. 
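Every RunPodSandbox failure in this stretch is the same precondition: Calico's CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes into the mounted /var/lib/calico/ once it is running, and until that file exists every CNI add or delete returns the error quoted in these messages. A minimal reproduction of the check, using only the path and hint named in the log; requireNodename is an illustrative name.

```go
// Minimal sketch of the precondition behind the CNI failures in this log:
// Calico's CNI plugin reads /var/lib/calico/nodename, which calico/node
// writes once it is up.
package main

import (
	"fmt"
	"os"
)

func requireNodename(path string) (string, error) {
	if _, err := os.Stat(path); err != nil {
		// Until calico/node has started, this is
		// "stat /var/lib/calico/nodename: no such file or directory".
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	name, err := requireNodename("/var/lib/calico/nodename")
	if err != nil {
		fmt.Println("CNI not ready:", err)
		return
	}
	fmt.Println("node name:", name)
}
```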
Dec 12 18:15:40.891537 containerd[1848]: time="2025-12-12T18:15:40.891334488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c8584fc-2p5wd,Uid:d5affb4a-a5c2-4140-9517-3b93721ff225,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d4ae7f6b38ab2493caa8a1952194f8d3c30d390edd63f0baaa355f7431ac28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.891658 kubelet[3114]: E1212 18:15:40.891626 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d4ae7f6b38ab2493caa8a1952194f8d3c30d390edd63f0baaa355f7431ac28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.891797 kubelet[3114]: E1212 18:15:40.891770 3114 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d4ae7f6b38ab2493caa8a1952194f8d3c30d390edd63f0baaa355f7431ac28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" Dec 12 18:15:40.891835 kubelet[3114]: E1212 18:15:40.891798 3114 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d4ae7f6b38ab2493caa8a1952194f8d3c30d390edd63f0baaa355f7431ac28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" Dec 12 18:15:40.891895 kubelet[3114]: E1212 18:15:40.891871 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-646c8584fc-2p5wd_calico-apiserver(d5affb4a-a5c2-4140-9517-3b93721ff225)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-646c8584fc-2p5wd_calico-apiserver(d5affb4a-a5c2-4140-9517-3b93721ff225)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4d4ae7f6b38ab2493caa8a1952194f8d3c30d390edd63f0baaa355f7431ac28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:15:40.894926 containerd[1848]: time="2025-12-12T18:15:40.894841939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c8584fc-mwbp6,Uid:fffa851a-7d3a-4af7-80b6-6f040212a19b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"debf7c8997ba5b0a3e3a85451d940b3dfb53c90c64a4f6489ded0265f7794b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.895077 kubelet[3114]: E1212 18:15:40.895055 3114 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"debf7c8997ba5b0a3e3a85451d940b3dfb53c90c64a4f6489ded0265f7794b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.895128 kubelet[3114]: E1212 18:15:40.895091 3114 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"debf7c8997ba5b0a3e3a85451d940b3dfb53c90c64a4f6489ded0265f7794b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" Dec 12 18:15:40.895128 kubelet[3114]: E1212 18:15:40.895110 3114 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"debf7c8997ba5b0a3e3a85451d940b3dfb53c90c64a4f6489ded0265f7794b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" Dec 12 18:15:40.895192 kubelet[3114]: E1212 18:15:40.895145 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-646c8584fc-mwbp6_calico-apiserver(fffa851a-7d3a-4af7-80b6-6f040212a19b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-646c8584fc-mwbp6_calico-apiserver(fffa851a-7d3a-4af7-80b6-6f040212a19b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"debf7c8997ba5b0a3e3a85451d940b3dfb53c90c64a4f6489ded0265f7794b03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:15:40.902489 containerd[1848]: time="2025-12-12T18:15:40.902436559Z" level=error msg="Failed to destroy network for sandbox \"7bf0ad3f225de6ee05adef53ad79bdcfd20011b46730d5f48104eb33f3141631\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.904438 systemd[1]: run-netns-cni\x2d177adb7e\x2de002\x2da6de\x2df949\x2d7c29fb2dae22.mount: Deactivated successfully. 
Dec 12 18:15:40.909575 containerd[1848]: time="2025-12-12T18:15:40.909515430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pt4l8,Uid:1fa5f3cd-fa74-4d8b-91cb-263e77629d8c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bf0ad3f225de6ee05adef53ad79bdcfd20011b46730d5f48104eb33f3141631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.909831 kubelet[3114]: E1212 18:15:40.909796 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bf0ad3f225de6ee05adef53ad79bdcfd20011b46730d5f48104eb33f3141631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.909879 kubelet[3114]: E1212 18:15:40.909855 3114 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bf0ad3f225de6ee05adef53ad79bdcfd20011b46730d5f48104eb33f3141631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pt4l8" Dec 12 18:15:40.909904 kubelet[3114]: E1212 18:15:40.909875 3114 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bf0ad3f225de6ee05adef53ad79bdcfd20011b46730d5f48104eb33f3141631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pt4l8" Dec 12 18:15:40.909976 kubelet[3114]: E1212 18:15:40.909955 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pt4l8_kube-system(1fa5f3cd-fa74-4d8b-91cb-263e77629d8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pt4l8_kube-system(1fa5f3cd-fa74-4d8b-91cb-263e77629d8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bf0ad3f225de6ee05adef53ad79bdcfd20011b46730d5f48104eb33f3141631\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pt4l8" podUID="1fa5f3cd-fa74-4d8b-91cb-263e77629d8c" Dec 12 18:15:40.913175 containerd[1848]: time="2025-12-12T18:15:40.913135360Z" level=error msg="Failed to destroy network for sandbox \"3a109bb41f4b84f0acb41cc2fb5a66b430cbbc55e0d06c10f650e28bd62d7fae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.915255 systemd[1]: run-netns-cni\x2d82351406\x2d5fc0\x2df679\x2dd37d\x2d81bd6f815866.mount: Deactivated successfully. 
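The runs of \" and \\\" in the CreatePodSandboxError and pod_workers messages are nothing more than nested quoting: each layer that embeds the message below it as a quoted string re-escapes the quotes already present. A toy illustration of the effect using Go's %q verb, not kubelet or containerd code.

```go
// Toy illustration of why the sandbox errors above contain \" and \\\":
// every additional quoting layer re-escapes the quotes of the layer below.
package main

import "fmt"

func main() {
	// Innermost message, as emitted by the CNI plugin.
	cni := `plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`
	// First quoting layer (e.g. a structured err="..." field): " becomes \".
	once := fmt.Sprintf("err=%q", cni)
	// Second layer quoting the already-quoted text: \" becomes \\\".
	twice := fmt.Sprintf("err=%q", once)
	fmt.Println(once)
	fmt.Println(twice)
}
```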
Dec 12 18:15:40.919415 containerd[1848]: time="2025-12-12T18:15:40.919361551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5447cc8774-bt5vn,Uid:e457a45d-7eaa-42e2-95fa-b7011451de77,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a109bb41f4b84f0acb41cc2fb5a66b430cbbc55e0d06c10f650e28bd62d7fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.919673 kubelet[3114]: E1212 18:15:40.919625 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a109bb41f4b84f0acb41cc2fb5a66b430cbbc55e0d06c10f650e28bd62d7fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.919718 kubelet[3114]: E1212 18:15:40.919706 3114 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a109bb41f4b84f0acb41cc2fb5a66b430cbbc55e0d06c10f650e28bd62d7fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" Dec 12 18:15:40.919747 kubelet[3114]: E1212 18:15:40.919728 3114 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a109bb41f4b84f0acb41cc2fb5a66b430cbbc55e0d06c10f650e28bd62d7fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" Dec 12 18:15:40.919833 kubelet[3114]: E1212 18:15:40.919802 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5447cc8774-bt5vn_calico-system(e457a45d-7eaa-42e2-95fa-b7011451de77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5447cc8774-bt5vn_calico-system(e457a45d-7eaa-42e2-95fa-b7011451de77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a109bb41f4b84f0acb41cc2fb5a66b430cbbc55e0d06c10f650e28bd62d7fae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:15:40.921880 containerd[1848]: time="2025-12-12T18:15:40.921836298Z" level=error msg="Failed to destroy network for sandbox \"f0a07108713ba6e031fde9e4bdb4ec21b74ec9e51e03fb51811d55eaed10bb2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.922502 containerd[1848]: time="2025-12-12T18:15:40.922461045Z" level=error msg="Failed to destroy network for sandbox \"7d9b65bf24eb77ff429b1220ad83ffb0f80a3c42bf178aa75f9b2c8649f9c5b6\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.924955 containerd[1848]: time="2025-12-12T18:15:40.924912725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4vrl2,Uid:a83fad8b-d566-4d95-b74e-3a16ee22e614,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a07108713ba6e031fde9e4bdb4ec21b74ec9e51e03fb51811d55eaed10bb2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.925122 kubelet[3114]: E1212 18:15:40.925083 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a07108713ba6e031fde9e4bdb4ec21b74ec9e51e03fb51811d55eaed10bb2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.925159 kubelet[3114]: E1212 18:15:40.925140 3114 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a07108713ba6e031fde9e4bdb4ec21b74ec9e51e03fb51811d55eaed10bb2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4vrl2" Dec 12 18:15:40.925196 kubelet[3114]: E1212 18:15:40.925161 3114 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a07108713ba6e031fde9e4bdb4ec21b74ec9e51e03fb51811d55eaed10bb2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4vrl2" Dec 12 18:15:40.925257 kubelet[3114]: E1212 18:15:40.925237 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4vrl2_calico-system(a83fad8b-d566-4d95-b74e-3a16ee22e614)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4vrl2_calico-system(a83fad8b-d566-4d95-b74e-3a16ee22e614)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0a07108713ba6e031fde9e4bdb4ec21b74ec9e51e03fb51811d55eaed10bb2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:15:40.930670 containerd[1848]: time="2025-12-12T18:15:40.930624654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d485f596-jjsv8,Uid:18dc805b-80db-4bdb-aaad-98fdbaf9a934,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9b65bf24eb77ff429b1220ad83ffb0f80a3c42bf178aa75f9b2c8649f9c5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.930875 
kubelet[3114]: E1212 18:15:40.930834 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9b65bf24eb77ff429b1220ad83ffb0f80a3c42bf178aa75f9b2c8649f9c5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:40.930928 kubelet[3114]: E1212 18:15:40.930887 3114 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9b65bf24eb77ff429b1220ad83ffb0f80a3c42bf178aa75f9b2c8649f9c5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58d485f596-jjsv8" Dec 12 18:15:40.930928 kubelet[3114]: E1212 18:15:40.930917 3114 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9b65bf24eb77ff429b1220ad83ffb0f80a3c42bf178aa75f9b2c8649f9c5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58d485f596-jjsv8" Dec 12 18:15:40.930986 kubelet[3114]: E1212 18:15:40.930966 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58d485f596-jjsv8_calico-system(18dc805b-80db-4bdb-aaad-98fdbaf9a934)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58d485f596-jjsv8_calico-system(18dc805b-80db-4bdb-aaad-98fdbaf9a934)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d9b65bf24eb77ff429b1220ad83ffb0f80a3c42bf178aa75f9b2c8649f9c5b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58d485f596-jjsv8" podUID="18dc805b-80db-4bdb-aaad-98fdbaf9a934" Dec 12 18:15:41.270342 systemd[1]: Created slice kubepods-besteffort-pod4cc6cae7_6092_4840_b2c3_065b3bb220f3.slice - libcontainer container kubepods-besteffort-pod4cc6cae7_6092_4840_b2c3_065b3bb220f3.slice. 
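Every sandbox operation in the burst above fails for the same reason: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes once it has started and registered the host, and that file does not exist yet. Until calico/node is running, every non-host-network pod on this node (the calico-apiserver, coredns, calico-kube-controllers, goldmane, whisker and csi-node-driver sandboxes above) is stuck in sandbox creation. A minimal sketch of the same readiness check, assuming the default /var/lib/calico layout named in the error message; it is an illustration, not the plugin's own code:

    # readiness_check.py - approximate the check behind the repeated CNI "stat" error above.
    import sys

    NODENAME_FILE = "/var/lib/calico/nodename"

    def calico_node_ready() -> bool:
        # The file only exists after calico/node has started and written the host's
        # registered node name into it.
        try:
            with open(NODENAME_FILE) as f:
                nodename = f.read().strip()
        except FileNotFoundError:
            print(f"stat {NODENAME_FILE}: no such file or directory "
                  "- calico/node has not registered this node yet")
            return False
        print(f"calico/node registered this host as {nodename!r}")
        return True

    if __name__ == "__main__":
        sys.exit(0 if calico_node_ready() else 1)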
Dec 12 18:15:41.272357 containerd[1848]: time="2025-12-12T18:15:41.272297174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9klxk,Uid:4cc6cae7-6092-4840-b2c3-065b3bb220f3,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:41.334607 containerd[1848]: time="2025-12-12T18:15:41.334542232Z" level=error msg="Failed to destroy network for sandbox \"fec3a82b6b2528f438d682243b7c827272af905dc55d85d91b6bcca3226f9768\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:41.339653 containerd[1848]: time="2025-12-12T18:15:41.339587739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9klxk,Uid:4cc6cae7-6092-4840-b2c3-065b3bb220f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec3a82b6b2528f438d682243b7c827272af905dc55d85d91b6bcca3226f9768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:41.339832 kubelet[3114]: E1212 18:15:41.339796 3114 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec3a82b6b2528f438d682243b7c827272af905dc55d85d91b6bcca3226f9768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:15:41.339876 kubelet[3114]: E1212 18:15:41.339854 3114 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec3a82b6b2528f438d682243b7c827272af905dc55d85d91b6bcca3226f9768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9klxk" Dec 12 18:15:41.339909 kubelet[3114]: E1212 18:15:41.339874 3114 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec3a82b6b2528f438d682243b7c827272af905dc55d85d91b6bcca3226f9768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9klxk" Dec 12 18:15:41.339957 kubelet[3114]: E1212 18:15:41.339929 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fec3a82b6b2528f438d682243b7c827272af905dc55d85d91b6bcca3226f9768\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:15:41.344152 containerd[1848]: time="2025-12-12T18:15:41.344113632Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:15:41.871507 systemd[1]: run-netns-cni\x2d9025627d\x2dbe9b\x2d4d20\x2df00c\x2d03f542c30cd1.mount: Deactivated successfully. Dec 12 18:15:41.871600 systemd[1]: run-netns-cni\x2d43bf1fab\x2db681\x2de118\x2df8fe\x2da7d1c9b52a5d.mount: Deactivated successfully. Dec 12 18:15:47.591619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950545269.mount: Deactivated successfully. Dec 12 18:15:47.612426 containerd[1848]: time="2025-12-12T18:15:47.612302514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:47.614254 containerd[1848]: time="2025-12-12T18:15:47.614193565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 12 18:15:47.615997 containerd[1848]: time="2025-12-12T18:15:47.615957120Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:47.618520 containerd[1848]: time="2025-12-12T18:15:47.618469623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:15:47.618939 containerd[1848]: time="2025-12-12T18:15:47.618898960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.274750636s" Dec 12 18:15:47.618939 containerd[1848]: time="2025-12-12T18:15:47.618928340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:15:47.632795 containerd[1848]: time="2025-12-12T18:15:47.632754217Z" level=info msg="CreateContainer within sandbox \"95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:15:47.648788 containerd[1848]: time="2025-12-12T18:15:47.648739534Z" level=info msg="Container 2ccc181a17ec44738f054a6b627ffa645359518894c870cc97ecbb222a95a469: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:47.659875 containerd[1848]: time="2025-12-12T18:15:47.659837059Z" level=info msg="CreateContainer within sandbox \"95c9c874312e99eac1aa1162fb76b72e58df4120f79f6d6d589cbd8cfef08201\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2ccc181a17ec44738f054a6b627ffa645359518894c870cc97ecbb222a95a469\"" Dec 12 18:15:47.660603 containerd[1848]: time="2025-12-12T18:15:47.660506827Z" level=info msg="StartContainer for \"2ccc181a17ec44738f054a6b627ffa645359518894c870cc97ecbb222a95a469\"" Dec 12 18:15:47.661656 containerd[1848]: time="2025-12-12T18:15:47.661635719Z" level=info msg="connecting to shim 2ccc181a17ec44738f054a6b627ffa645359518894c870cc97ecbb222a95a469" address="unix:///run/containerd/s/d1ec697bb8577309eb8a392996774d7fe675789b132343e0df0d311cfc34031f" protocol=ttrpc version=3 Dec 12 18:15:47.679870 systemd[1]: Started cri-containerd-2ccc181a17ec44738f054a6b627ffa645359518894c870cc97ecbb222a95a469.scope - libcontainer container 
2ccc181a17ec44738f054a6b627ffa645359518894c870cc97ecbb222a95a469. Dec 12 18:15:47.746000 audit: BPF prog-id=172 op=LOAD Dec 12 18:15:47.748242 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 12 18:15:47.748327 kernel: audit: type=1334 audit(1765563347.746:573): prog-id=172 op=LOAD Dec 12 18:15:47.746000 audit[4287]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3695 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:47.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263636331383161313765633434373338663035346136623632376666 Dec 12 18:15:47.757675 kernel: audit: type=1300 audit(1765563347.746:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3695 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:47.757736 kernel: audit: type=1327 audit(1765563347.746:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263636331383161313765633434373338663035346136623632376666 Dec 12 18:15:47.746000 audit: BPF prog-id=173 op=LOAD Dec 12 18:15:47.761671 kernel: audit: type=1334 audit(1765563347.746:574): prog-id=173 op=LOAD Dec 12 18:15:47.746000 audit[4287]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3695 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:47.764792 kernel: audit: type=1300 audit(1765563347.746:574): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3695 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:47.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263636331383161313765633434373338663035346136623632376666 Dec 12 18:15:47.770417 kernel: audit: type=1327 audit(1765563347.746:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263636331383161313765633434373338663035346136623632376666 Dec 12 18:15:47.746000 audit: BPF prog-id=173 op=UNLOAD Dec 12 18:15:47.774411 kernel: audit: type=1334 audit(1765563347.746:575): prog-id=173 op=UNLOAD Dec 12 18:15:47.746000 audit[4287]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 12 18:15:47.777516 kernel: audit: type=1300 audit(1765563347.746:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:47.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263636331383161313765633434373338663035346136623632376666 Dec 12 18:15:47.782016 containerd[1848]: time="2025-12-12T18:15:47.781974689Z" level=info msg="StartContainer for \"2ccc181a17ec44738f054a6b627ffa645359518894c870cc97ecbb222a95a469\" returns successfully" Dec 12 18:15:47.784166 kernel: audit: type=1327 audit(1765563347.746:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263636331383161313765633434373338663035346136623632376666 Dec 12 18:15:47.746000 audit: BPF prog-id=172 op=UNLOAD Dec 12 18:15:47.788749 kernel: audit: type=1334 audit(1765563347.746:576): prog-id=172 op=UNLOAD Dec 12 18:15:47.746000 audit[4287]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:47.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263636331383161313765633434373338663035346136623632376666 Dec 12 18:15:47.746000 audit: BPF prog-id=174 op=LOAD Dec 12 18:15:47.746000 audit[4287]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3695 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:47.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263636331383161313765633434373338663035346136623632376666 Dec 12 18:15:47.864850 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:15:47.864959 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
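The audit: BPF prog-id=... op=LOAD/UNLOAD pairs above are recorded while runc (comm="runc", exe="/usr/bin/runc") sets up the calico-node container; the bpftool events further down belong to Calico's own dataplane setup. In each event the PROCTITLE field is the issuing process's command line, hex-encoded because its arguments are separated by NUL bytes. A short sketch of how such a value decodes, using the compact proctitle that accompanies the bpftool events later in this log as the sample input (the longer runc values decode the same way); the file name is illustrative:

    # decode_proctitle.py - turn an audit PROCTITLE hex string back into a command line.
    PROCTITLE_HEX = "627066746F6F6C006D6170006C697374002D2D6A736F6E"  # from the bpftool events below

    def decode_proctitle(hex_value: str) -> str:
        # Arguments are NUL-separated in the raw proctitle, so rejoin them with spaces.
        raw = bytes.fromhex(hex_value)
        return " ".join(part.decode("utf-8", errors="replace")
                        for part in raw.split(b"\x00") if part)

    if __name__ == "__main__":
        print(decode_proctitle(PROCTITLE_HEX))  # -> bpftool map list --json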
Dec 12 18:15:48.133760 kubelet[3114]: I1212 18:15:48.133710 3114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18dc805b-80db-4bdb-aaad-98fdbaf9a934-whisker-ca-bundle\") pod \"18dc805b-80db-4bdb-aaad-98fdbaf9a934\" (UID: \"18dc805b-80db-4bdb-aaad-98fdbaf9a934\") " Dec 12 18:15:48.133760 kubelet[3114]: I1212 18:15:48.133763 3114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18dc805b-80db-4bdb-aaad-98fdbaf9a934-whisker-backend-key-pair\") pod \"18dc805b-80db-4bdb-aaad-98fdbaf9a934\" (UID: \"18dc805b-80db-4bdb-aaad-98fdbaf9a934\") " Dec 12 18:15:48.134119 kubelet[3114]: I1212 18:15:48.134015 3114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r66d4\" (UniqueName: \"kubernetes.io/projected/18dc805b-80db-4bdb-aaad-98fdbaf9a934-kube-api-access-r66d4\") pod \"18dc805b-80db-4bdb-aaad-98fdbaf9a934\" (UID: \"18dc805b-80db-4bdb-aaad-98fdbaf9a934\") " Dec 12 18:15:48.134119 kubelet[3114]: I1212 18:15:48.134063 3114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18dc805b-80db-4bdb-aaad-98fdbaf9a934-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "18dc805b-80db-4bdb-aaad-98fdbaf9a934" (UID: "18dc805b-80db-4bdb-aaad-98fdbaf9a934"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:15:48.136109 kubelet[3114]: I1212 18:15:48.136061 3114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dc805b-80db-4bdb-aaad-98fdbaf9a934-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "18dc805b-80db-4bdb-aaad-98fdbaf9a934" (UID: "18dc805b-80db-4bdb-aaad-98fdbaf9a934"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:15:48.136334 kubelet[3114]: I1212 18:15:48.136294 3114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18dc805b-80db-4bdb-aaad-98fdbaf9a934-kube-api-access-r66d4" (OuterVolumeSpecName: "kube-api-access-r66d4") pod "18dc805b-80db-4bdb-aaad-98fdbaf9a934" (UID: "18dc805b-80db-4bdb-aaad-98fdbaf9a934"). InnerVolumeSpecName "kube-api-access-r66d4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:15:48.235053 kubelet[3114]: I1212 18:15:48.234961 3114 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18dc805b-80db-4bdb-aaad-98fdbaf9a934-whisker-ca-bundle\") on node \"ci-4515-1-0-e-14f87f00b0\" DevicePath \"\"" Dec 12 18:15:48.235053 kubelet[3114]: I1212 18:15:48.235008 3114 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18dc805b-80db-4bdb-aaad-98fdbaf9a934-whisker-backend-key-pair\") on node \"ci-4515-1-0-e-14f87f00b0\" DevicePath \"\"" Dec 12 18:15:48.235053 kubelet[3114]: I1212 18:15:48.235022 3114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r66d4\" (UniqueName: \"kubernetes.io/projected/18dc805b-80db-4bdb-aaad-98fdbaf9a934-kube-api-access-r66d4\") on node \"ci-4515-1-0-e-14f87f00b0\" DevicePath \"\"" Dec 12 18:15:48.271423 systemd[1]: Removed slice kubepods-besteffort-pod18dc805b_80db_4bdb_aaad_98fdbaf9a934.slice - libcontainer container kubepods-besteffort-pod18dc805b_80db_4bdb_aaad_98fdbaf9a934.slice. Dec 12 18:15:48.373378 kubelet[3114]: I1212 18:15:48.372961 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n7m4g" podStartSLOduration=1.832510761 podStartE2EDuration="18.372944235s" podCreationTimestamp="2025-12-12 18:15:30 +0000 UTC" firstStartedPulling="2025-12-12 18:15:31.079112454 +0000 UTC m=+24.891635567" lastFinishedPulling="2025-12-12 18:15:47.619545928 +0000 UTC m=+41.432069041" observedRunningTime="2025-12-12 18:15:48.372943118 +0000 UTC m=+42.185466253" watchObservedRunningTime="2025-12-12 18:15:48.372944235 +0000 UTC m=+42.185467370" Dec 12 18:15:48.425951 systemd[1]: Created slice kubepods-besteffort-poddc52e510_0fd6_4a21_9182_a1df4018bab8.slice - libcontainer container kubepods-besteffort-poddc52e510_0fd6_4a21_9182_a1df4018bab8.slice. Dec 12 18:15:48.536736 kubelet[3114]: I1212 18:15:48.536655 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc52e510-0fd6-4a21-9182-a1df4018bab8-whisker-backend-key-pair\") pod \"whisker-7cbd858cfd-bgn9w\" (UID: \"dc52e510-0fd6-4a21-9182-a1df4018bab8\") " pod="calico-system/whisker-7cbd858cfd-bgn9w" Dec 12 18:15:48.536736 kubelet[3114]: I1212 18:15:48.536717 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc52e510-0fd6-4a21-9182-a1df4018bab8-whisker-ca-bundle\") pod \"whisker-7cbd858cfd-bgn9w\" (UID: \"dc52e510-0fd6-4a21-9182-a1df4018bab8\") " pod="calico-system/whisker-7cbd858cfd-bgn9w" Dec 12 18:15:48.536736 kubelet[3114]: I1212 18:15:48.536737 3114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svwkt\" (UniqueName: \"kubernetes.io/projected/dc52e510-0fd6-4a21-9182-a1df4018bab8-kube-api-access-svwkt\") pod \"whisker-7cbd858cfd-bgn9w\" (UID: \"dc52e510-0fd6-4a21-9182-a1df4018bab8\") " pod="calico-system/whisker-7cbd858cfd-bgn9w" Dec 12 18:15:48.592615 systemd[1]: var-lib-kubelet-pods-18dc805b\x2d80db\x2d4bdb\x2daaad\x2d98fdbaf9a934-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr66d4.mount: Deactivated successfully. 
Dec 12 18:15:48.592729 systemd[1]: var-lib-kubelet-pods-18dc805b\x2d80db\x2d4bdb\x2daaad\x2d98fdbaf9a934-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:15:48.730551 containerd[1848]: time="2025-12-12T18:15:48.730407078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cbd858cfd-bgn9w,Uid:dc52e510-0fd6-4a21-9182-a1df4018bab8,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:48.836072 systemd-networkd[1745]: cali26a4c2f8b2a: Link UP Dec 12 18:15:48.836641 systemd-networkd[1745]: cali26a4c2f8b2a: Gained carrier Dec 12 18:15:48.848123 containerd[1848]: 2025-12-12 18:15:48.752 [INFO][4360] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:15:48.848123 containerd[1848]: 2025-12-12 18:15:48.763 [INFO][4360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0 whisker-7cbd858cfd- calico-system dc52e510-0fd6-4a21-9182-a1df4018bab8 887 0 2025-12-12 18:15:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cbd858cfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515-1-0-e-14f87f00b0 whisker-7cbd858cfd-bgn9w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali26a4c2f8b2a [] [] }} ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Namespace="calico-system" Pod="whisker-7cbd858cfd-bgn9w" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-" Dec 12 18:15:48.848123 containerd[1848]: 2025-12-12 18:15:48.763 [INFO][4360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Namespace="calico-system" Pod="whisker-7cbd858cfd-bgn9w" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" Dec 12 18:15:48.848123 containerd[1848]: 2025-12-12 18:15:48.788 [INFO][4377] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" HandleID="k8s-pod-network.41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Workload="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" Dec 12 18:15:48.848350 containerd[1848]: 2025-12-12 18:15:48.788 [INFO][4377] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" HandleID="k8s-pod-network.41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Workload="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-e-14f87f00b0", "pod":"whisker-7cbd858cfd-bgn9w", "timestamp":"2025-12-12 18:15:48.787997806 +0000 UTC"}, Hostname:"ci-4515-1-0-e-14f87f00b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:15:48.848350 containerd[1848]: 2025-12-12 18:15:48.788 [INFO][4377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 18:15:48.848350 containerd[1848]: 2025-12-12 18:15:48.788 [INFO][4377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:15:48.848350 containerd[1848]: 2025-12-12 18:15:48.788 [INFO][4377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-14f87f00b0' Dec 12 18:15:48.848350 containerd[1848]: 2025-12-12 18:15:48.796 [INFO][4377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:48.848350 containerd[1848]: 2025-12-12 18:15:48.808 [INFO][4377] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:48.848350 containerd[1848]: 2025-12-12 18:15:48.812 [INFO][4377] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:48.848350 containerd[1848]: 2025-12-12 18:15:48.814 [INFO][4377] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:48.848350 containerd[1848]: 2025-12-12 18:15:48.816 [INFO][4377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:48.848536 containerd[1848]: 2025-12-12 18:15:48.817 [INFO][4377] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:48.848536 containerd[1848]: 2025-12-12 18:15:48.818 [INFO][4377] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f Dec 12 18:15:48.848536 containerd[1848]: 2025-12-12 18:15:48.821 [INFO][4377] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:48.848536 containerd[1848]: 2025-12-12 18:15:48.827 [INFO][4377] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.129/26] block=192.168.15.128/26 handle="k8s-pod-network.41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:48.848536 containerd[1848]: 2025-12-12 18:15:48.827 [INFO][4377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.129/26] handle="k8s-pod-network.41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:48.848536 containerd[1848]: 2025-12-12 18:15:48.827 [INFO][4377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:15:48.848536 containerd[1848]: 2025-12-12 18:15:48.827 [INFO][4377] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.129/26] IPv6=[] ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" HandleID="k8s-pod-network.41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Workload="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" Dec 12 18:15:48.848777 containerd[1848]: 2025-12-12 18:15:48.829 [INFO][4360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Namespace="calico-system" Pod="whisker-7cbd858cfd-bgn9w" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0", GenerateName:"whisker-7cbd858cfd-", Namespace:"calico-system", SelfLink:"", UID:"dc52e510-0fd6-4a21-9182-a1df4018bab8", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cbd858cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"", Pod:"whisker-7cbd858cfd-bgn9w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26a4c2f8b2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:48.848777 containerd[1848]: 2025-12-12 18:15:48.830 [INFO][4360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.129/32] ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Namespace="calico-system" Pod="whisker-7cbd858cfd-bgn9w" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" Dec 12 18:15:48.848856 containerd[1848]: 2025-12-12 18:15:48.830 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26a4c2f8b2a ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Namespace="calico-system" Pod="whisker-7cbd858cfd-bgn9w" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" Dec 12 18:15:48.848856 containerd[1848]: 2025-12-12 18:15:48.836 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Namespace="calico-system" Pod="whisker-7cbd858cfd-bgn9w" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" Dec 12 18:15:48.848894 containerd[1848]: 2025-12-12 18:15:48.837 [INFO][4360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" 
Namespace="calico-system" Pod="whisker-7cbd858cfd-bgn9w" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0", GenerateName:"whisker-7cbd858cfd-", Namespace:"calico-system", SelfLink:"", UID:"dc52e510-0fd6-4a21-9182-a1df4018bab8", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cbd858cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f", Pod:"whisker-7cbd858cfd-bgn9w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26a4c2f8b2a", MAC:"92:cb:76:8f:d6:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:48.848949 containerd[1848]: 2025-12-12 18:15:48.846 [INFO][4360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" Namespace="calico-system" Pod="whisker-7cbd858cfd-bgn9w" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-whisker--7cbd858cfd--bgn9w-eth0" Dec 12 18:15:48.874145 containerd[1848]: time="2025-12-12T18:15:48.874100196Z" level=info msg="connecting to shim 41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f" address="unix:///run/containerd/s/18364904f04fcf1ef086881c0caf7b44c49f213b51f2807fc5b622e08280b37c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:48.904923 systemd[1]: Started cri-containerd-41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f.scope - libcontainer container 41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f. 
Dec 12 18:15:48.914000 audit: BPF prog-id=175 op=LOAD Dec 12 18:15:48.914000 audit: BPF prog-id=176 op=LOAD Dec 12 18:15:48.914000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4403 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:48.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653730303839313963653631333864306136616562646636323663 Dec 12 18:15:48.914000 audit: BPF prog-id=176 op=UNLOAD Dec 12 18:15:48.914000 audit[4415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4403 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:48.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653730303839313963653631333864306136616562646636323663 Dec 12 18:15:48.914000 audit: BPF prog-id=177 op=LOAD Dec 12 18:15:48.914000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4403 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:48.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653730303839313963653631333864306136616562646636323663 Dec 12 18:15:48.915000 audit: BPF prog-id=178 op=LOAD Dec 12 18:15:48.915000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4403 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:48.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653730303839313963653631333864306136616562646636323663 Dec 12 18:15:48.915000 audit: BPF prog-id=178 op=UNLOAD Dec 12 18:15:48.915000 audit[4415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4403 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:48.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653730303839313963653631333864306136616562646636323663 Dec 12 18:15:48.915000 audit: BPF prog-id=177 op=UNLOAD Dec 12 18:15:48.915000 audit[4415]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4403 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:48.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653730303839313963653631333864306136616562646636323663 Dec 12 18:15:48.915000 audit: BPF prog-id=179 op=LOAD Dec 12 18:15:48.915000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4403 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:48.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431653730303839313963653631333864306136616562646636323663 Dec 12 18:15:48.947457 containerd[1848]: time="2025-12-12T18:15:48.947391943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cbd858cfd-bgn9w,Uid:dc52e510-0fd6-4a21-9182-a1df4018bab8,Namespace:calico-system,Attempt:0,} returns sandbox id \"41e7008919ce6138d0a6aebdf626cc05d0bff7933860d02fc8c5621ec1e4c30f\"" Dec 12 18:15:48.948925 containerd[1848]: time="2025-12-12T18:15:48.948898694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:15:49.259000 audit: BPF prog-id=180 op=LOAD Dec 12 18:15:49.259000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc456b0b00 a2=98 a3=1fffffffffffffff items=0 ppid=4456 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.259000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:15:49.259000 audit: BPF prog-id=180 op=UNLOAD Dec 12 18:15:49.259000 audit[4583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc456b0ad0 a3=0 items=0 ppid=4456 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.259000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:15:49.259000 audit: BPF prog-id=181 op=LOAD Dec 12 18:15:49.259000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc456b09e0 a2=94 a3=3 items=0 ppid=4456 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.259000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:15:49.259000 audit: BPF prog-id=181 op=UNLOAD Dec 12 18:15:49.259000 audit[4583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc456b09e0 a2=94 a3=3 items=0 ppid=4456 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.259000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:15:49.259000 audit: BPF prog-id=182 op=LOAD Dec 12 18:15:49.259000 audit[4583]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc456b0a20 a2=94 a3=7ffc456b0c00 items=0 ppid=4456 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.259000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:15:49.259000 audit: BPF prog-id=182 op=UNLOAD Dec 12 18:15:49.259000 audit[4583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc456b0a20 a2=94 a3=7ffc456b0c00 items=0 ppid=4456 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.259000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:15:49.260000 audit: BPF prog-id=183 op=LOAD Dec 12 18:15:49.260000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8b155190 a2=98 a3=3 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.260000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.260000 audit: BPF prog-id=183 op=UNLOAD Dec 12 18:15:49.260000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd8b155160 a3=0 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.260000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.261000 audit: BPF prog-id=184 op=LOAD Dec 12 18:15:49.261000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8b154f80 a2=94 a3=54428f items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.261000 audit: BPF prog-id=184 op=UNLOAD Dec 12 18:15:49.261000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8b154f80 a2=94 a3=54428f items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.261000 audit: BPF prog-id=185 op=LOAD Dec 12 18:15:49.261000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8b154fb0 a2=94 a3=2 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.261000 audit: BPF prog-id=185 op=UNLOAD Dec 12 18:15:49.261000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8b154fb0 a2=0 a3=2 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.279079 containerd[1848]: time="2025-12-12T18:15:49.279017946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:49.280926 containerd[1848]: time="2025-12-12T18:15:49.280892268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:15:49.280998 containerd[1848]: time="2025-12-12T18:15:49.280968419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:49.281152 kubelet[3114]: E1212 18:15:49.281108 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:15:49.281418 kubelet[3114]: E1212 18:15:49.281161 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:15:49.281444 kubelet[3114]: E1212 18:15:49.281324 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:32f7d598b8bd491099e853ab2d7a3772,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:49.283182 containerd[1848]: time="2025-12-12T18:15:49.283139850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:15:49.415000 audit: BPF prog-id=186 op=LOAD Dec 12 18:15:49.415000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8b154e70 a2=94 a3=1 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.415000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.415000 audit: BPF prog-id=186 op=UNLOAD Dec 12 18:15:49.415000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8b154e70 a2=94 a3=1 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.415000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.425000 audit: BPF prog-id=187 op=LOAD Dec 12 18:15:49.425000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8b154e60 a2=94 a3=4 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.425000 audit: BPF prog-id=187 op=UNLOAD Dec 12 18:15:49.425000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8b154e60 a2=0 a3=4 
items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.425000 audit: BPF prog-id=188 op=LOAD Dec 12 18:15:49.425000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd8b154cc0 a2=94 a3=5 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.425000 audit: BPF prog-id=188 op=UNLOAD Dec 12 18:15:49.425000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd8b154cc0 a2=0 a3=5 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.425000 audit: BPF prog-id=189 op=LOAD Dec 12 18:15:49.425000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8b154ee0 a2=94 a3=6 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.425000 audit: BPF prog-id=189 op=UNLOAD Dec 12 18:15:49.425000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8b154ee0 a2=0 a3=6 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.425000 audit: BPF prog-id=190 op=LOAD Dec 12 18:15:49.425000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8b154690 a2=94 a3=88 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.426000 audit: BPF prog-id=191 op=LOAD Dec 12 18:15:49.426000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd8b154510 a2=94 a3=2 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.426000 audit: BPF prog-id=191 op=UNLOAD Dec 12 18:15:49.426000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd8b154540 a2=0 a3=7ffd8b154640 items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.426000 audit: BPF prog-id=190 op=UNLOAD Dec 12 18:15:49.426000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=392a3d10 a2=0 a3=9adaabcdf313c06e items=0 ppid=4456 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:15:49.436000 audit: BPF prog-id=192 op=LOAD Dec 12 18:15:49.436000 audit[4615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd8d888f0 a2=98 a3=1999999999999999 items=0 ppid=4456 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:15:49.436000 audit: BPF prog-id=192 op=UNLOAD Dec 12 18:15:49.436000 audit[4615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd8d888c0 a3=0 items=0 ppid=4456 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:15:49.436000 audit: BPF prog-id=193 op=LOAD Dec 12 18:15:49.436000 audit[4615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd8d887d0 a2=94 a3=ffff items=0 ppid=4456 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:15:49.436000 audit: BPF prog-id=193 op=UNLOAD Dec 12 18:15:49.436000 audit[4615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd8d887d0 a2=94 a3=ffff items=0 ppid=4456 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:15:49.436000 audit: BPF prog-id=194 op=LOAD Dec 12 18:15:49.436000 audit[4615]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7ffcd8d88810 a2=94 a3=7ffcd8d889f0 items=0 ppid=4456 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:15:49.436000 audit: BPF prog-id=194 op=UNLOAD Dec 12 18:15:49.436000 audit[4615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd8d88810 a2=94 a3=7ffcd8d889f0 items=0 ppid=4456 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:15:49.494000 audit: BPF prog-id=195 op=LOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5aceabb0 a2=98 a3=0 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.494000 audit: BPF prog-id=195 op=UNLOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc5aceab80 a3=0 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.494000 audit: BPF prog-id=196 op=LOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5acea9c0 a2=94 a3=54428f items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.494000 audit: BPF prog-id=196 op=UNLOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc5acea9c0 a2=94 a3=54428f items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.494000 audit: BPF prog-id=197 op=LOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5acea9f0 a2=94 a3=2 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.494000 audit: BPF prog-id=197 op=UNLOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc5acea9f0 a2=0 a3=2 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.494000 audit: BPF prog-id=198 op=LOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5acea7a0 a2=94 a3=4 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.494000 audit: BPF prog-id=198 op=UNLOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc5acea7a0 a2=94 a3=4 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.494000 audit: BPF prog-id=199 op=LOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5acea8a0 a2=94 a3=7ffc5aceaa20 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.494000 
audit: BPF prog-id=199 op=UNLOAD Dec 12 18:15:49.494000 audit[4641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc5acea8a0 a2=0 a3=7ffc5aceaa20 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.495000 audit: BPF prog-id=200 op=LOAD Dec 12 18:15:49.495000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5ace9fd0 a2=94 a3=2 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.495000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.495000 audit: BPF prog-id=200 op=UNLOAD Dec 12 18:15:49.495000 audit[4641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc5ace9fd0 a2=0 a3=2 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.495000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.495000 audit: BPF prog-id=201 op=LOAD Dec 12 18:15:49.495000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5acea0d0 a2=94 a3=30 items=0 ppid=4456 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.495000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:15:49.505451 systemd-networkd[1745]: vxlan.calico: Link UP Dec 12 18:15:49.505460 systemd-networkd[1745]: vxlan.calico: Gained carrier Dec 12 18:15:49.505000 audit: BPF prog-id=202 op=LOAD Dec 12 18:15:49.505000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde05b9670 a2=98 a3=0 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.505000 audit: BPF prog-id=202 op=UNLOAD Dec 12 18:15:49.505000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffde05b9640 a3=0 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.506000 audit: BPF prog-id=203 op=LOAD Dec 12 18:15:49.506000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde05b9460 a2=94 a3=54428f items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.506000 audit: BPF prog-id=203 op=UNLOAD Dec 12 18:15:49.506000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde05b9460 a2=94 a3=54428f items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.506000 audit: BPF prog-id=204 op=LOAD Dec 12 18:15:49.506000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde05b9490 a2=94 a3=2 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.506000 audit: BPF prog-id=204 op=UNLOAD Dec 12 18:15:49.506000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde05b9490 a2=0 a3=2 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.620496 containerd[1848]: time="2025-12-12T18:15:49.620450370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:49.622351 containerd[1848]: time="2025-12-12T18:15:49.622314181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:15:49.622405 containerd[1848]: time="2025-12-12T18:15:49.622348388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes 
read=0" Dec 12 18:15:49.622583 kubelet[3114]: E1212 18:15:49.622545 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:15:49.622622 kubelet[3114]: E1212 18:15:49.622594 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:15:49.622754 kubelet[3114]: E1212 18:15:49.622722 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:49.624162 kubelet[3114]: E1212 18:15:49.624078 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:15:49.664000 audit: BPF prog-id=205 op=LOAD Dec 12 18:15:49.664000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde05b9350 a2=94 a3=1 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.664000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.664000 audit: BPF prog-id=205 op=UNLOAD Dec 12 18:15:49.664000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde05b9350 a2=94 a3=1 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.664000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.675000 audit: BPF prog-id=206 op=LOAD Dec 12 18:15:49.675000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde05b9340 a2=94 a3=4 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.675000 audit: BPF prog-id=206 op=UNLOAD Dec 12 18:15:49.675000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde05b9340 a2=0 a3=4 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.675000 audit: BPF prog-id=207 op=LOAD Dec 12 18:15:49.675000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffde05b91a0 a2=94 a3=5 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.675000 audit: BPF prog-id=207 op=UNLOAD Dec 12 18:15:49.675000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffde05b91a0 a2=0 a3=5 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.675000 audit: BPF prog-id=208 op=LOAD Dec 12 18:15:49.675000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde05b93c0 a2=94 a3=6 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.675000 audit: BPF prog-id=208 op=UNLOAD Dec 12 18:15:49.675000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde05b93c0 a2=0 a3=6 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.675000 audit: BPF prog-id=209 op=LOAD Dec 12 18:15:49.675000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde05b8b70 a2=94 a3=88 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.676000 audit: BPF prog-id=210 op=LOAD Dec 12 18:15:49.676000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffde05b89f0 a2=94 a3=2 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.676000 audit: BPF prog-id=210 op=UNLOAD Dec 12 18:15:49.676000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffde05b8a20 a2=0 a3=7ffde05b8b20 items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.676000 audit: BPF prog-id=209 op=UNLOAD Dec 12 18:15:49.676000 audit[4659]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=5 a1=d9a9d10 a2=0 a3=e91db4405949f59e items=0 ppid=4456 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:15:49.695000 audit: BPF prog-id=201 op=UNLOAD Dec 12 18:15:49.695000 audit[4456]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c001238cc0 a2=0 a3=0 items=0 ppid=4445 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.695000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 18:15:49.748000 audit[4682]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:49.748000 audit[4682]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffc031aea0 a2=0 a3=7fffc031ae8c items=0 ppid=4456 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.748000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:49.750000 audit[4685]: NETFILTER_CFG table=mangle:120 family=2 entries=16 op=nft_register_chain pid=4685 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:49.750000 audit[4685]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffff95300c0 a2=0 a3=7ffff95300ac items=0 ppid=4456 pid=4685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.750000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:49.756000 audit[4681]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:49.756000 audit[4681]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffde2a523f0 a2=0 a3=7ffde2a523dc items=0 ppid=4456 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.756000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:49.759000 audit[4683]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4683 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:49.759000 audit[4683]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffd703a7d50 a2=0 a3=7ffd703a7d3c items=0 ppid=4456 pid=4683 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:49.759000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:50.267238 kubelet[3114]: I1212 18:15:50.267187 3114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18dc805b-80db-4bdb-aaad-98fdbaf9a934" path="/var/lib/kubelet/pods/18dc805b-80db-4bdb-aaad-98fdbaf9a934/volumes" Dec 12 18:15:50.363303 kubelet[3114]: E1212 18:15:50.363243 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:15:50.384000 audit[4714]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:50.384000 audit[4714]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcec6ef360 a2=0 a3=7ffcec6ef34c items=0 ppid=3256 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:50.384000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:50.400000 audit[4714]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:50.400000 audit[4714]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcec6ef360 a2=0 a3=0 items=0 ppid=3256 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:50.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:50.518825 systemd-networkd[1745]: cali26a4c2f8b2a: Gained IPv6LL Dec 12 18:15:51.031006 systemd-networkd[1745]: vxlan.calico: Gained IPv6LL Dec 12 18:15:52.265339 containerd[1848]: time="2025-12-12T18:15:52.265278364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5447cc8774-bt5vn,Uid:e457a45d-7eaa-42e2-95fa-b7011451de77,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:52.265904 containerd[1848]: time="2025-12-12T18:15:52.265882199Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-9klxk,Uid:4cc6cae7-6092-4840-b2c3-065b3bb220f3,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:52.373320 systemd-networkd[1745]: cali0e8af65e0a0: Link UP Dec 12 18:15:52.373750 systemd-networkd[1745]: cali0e8af65e0a0: Gained carrier Dec 12 18:15:52.384035 containerd[1848]: 2025-12-12 18:15:52.317 [INFO][4736] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0 csi-node-driver- calico-system 4cc6cae7-6092-4840-b2c3-065b3bb220f3 709 0 2025-12-12 18:15:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515-1-0-e-14f87f00b0 csi-node-driver-9klxk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0e8af65e0a0 [] [] }} ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Namespace="calico-system" Pod="csi-node-driver-9klxk" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-" Dec 12 18:15:52.384035 containerd[1848]: 2025-12-12 18:15:52.317 [INFO][4736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Namespace="calico-system" Pod="csi-node-driver-9klxk" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" Dec 12 18:15:52.384035 containerd[1848]: 2025-12-12 18:15:52.337 [INFO][4766] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" HandleID="k8s-pod-network.1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Workload="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" Dec 12 18:15:52.384227 containerd[1848]: 2025-12-12 18:15:52.337 [INFO][4766] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" HandleID="k8s-pod-network.1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Workload="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-e-14f87f00b0", "pod":"csi-node-driver-9klxk", "timestamp":"2025-12-12 18:15:52.337090901 +0000 UTC"}, Hostname:"ci-4515-1-0-e-14f87f00b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:15:52.384227 containerd[1848]: 2025-12-12 18:15:52.337 [INFO][4766] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:15:52.384227 containerd[1848]: 2025-12-12 18:15:52.337 [INFO][4766] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
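The ErrImagePull / ImagePullBackOff entries above come from containerd receiving 404 Not Found from ghcr.io for the ghcr.io/flatcar/calico/whisker:v3.30.4 and whisker-backend:v3.30.4 tags. A minimal sketch of how the same 404 could be reproduced off-box, assuming the repository is public and that ghcr.io honours the standard registry v2 anonymous token flow (the token endpoint and Accept headers below are that assumption, not something taken from the log):

    import json
    import urllib.request
    import urllib.error

    repo = "flatcar/calico/whisker"   # repository from the failing image reference
    tag = "v3.30.4"                   # tag containerd could not resolve

    # Assumed anonymous-pull token flow for a public GHCR repository.
    tok_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    token = json.load(urllib.request.urlopen(tok_url))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("manifest found, HTTP", resp.status)
    except urllib.error.HTTPError as err:
        print("manifest lookup failed, HTTP", err.code)  # 404 would match the error logged above

A 404 here would indicate the tag simply is not published under that name, which is consistent with kubelet degrading to ImagePullBackOff rather than reporting an auth or network error.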
Dec 12 18:15:52.384227 containerd[1848]: 2025-12-12 18:15:52.337 [INFO][4766] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-14f87f00b0' Dec 12 18:15:52.384227 containerd[1848]: 2025-12-12 18:15:52.345 [INFO][4766] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.384227 containerd[1848]: 2025-12-12 18:15:52.350 [INFO][4766] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.384227 containerd[1848]: 2025-12-12 18:15:52.353 [INFO][4766] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.384227 containerd[1848]: 2025-12-12 18:15:52.355 [INFO][4766] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.384227 containerd[1848]: 2025-12-12 18:15:52.357 [INFO][4766] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.384429 containerd[1848]: 2025-12-12 18:15:52.357 [INFO][4766] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.384429 containerd[1848]: 2025-12-12 18:15:52.358 [INFO][4766] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee Dec 12 18:15:52.384429 containerd[1848]: 2025-12-12 18:15:52.363 [INFO][4766] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.384429 containerd[1848]: 2025-12-12 18:15:52.370 [INFO][4766] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.130/26] block=192.168.15.128/26 handle="k8s-pod-network.1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.384429 containerd[1848]: 2025-12-12 18:15:52.370 [INFO][4766] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.130/26] handle="k8s-pod-network.1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.384429 containerd[1848]: 2025-12-12 18:15:52.370 [INFO][4766] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
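The IPAM trace above shows Calico taking the host-wide IPAM lock, confirming this node's affinity for the block 192.168.15.128/26, and claiming 192.168.15.130/26 for csi-node-driver-9klxk. A minimal sketch with the Python standard library ipaddress module, only to illustrate the block arithmetic implied by those lines (not Calico's own code):

    import ipaddress

    block = ipaddress.ip_network("192.168.15.128/26")   # host-affine block from the log
    claimed = ipaddress.ip_address("192.168.15.130")    # address claimed for the pod

    print(block.num_addresses)       # 64 addresses per /26 block
    print(claimed in block)          # True: the claimed IP falls inside the node's block
    print(block.broadcast_address)   # 192.168.15.191, the last address of the block

Calico hands each node one or more such /26 blocks and assigns pod addresses from them, which is why consecutive pods on this node (.130 here, .131 further down) land in the same block.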
Dec 12 18:15:52.384429 containerd[1848]: 2025-12-12 18:15:52.370 [INFO][4766] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.130/26] IPv6=[] ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" HandleID="k8s-pod-network.1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Workload="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" Dec 12 18:15:52.384557 containerd[1848]: 2025-12-12 18:15:52.371 [INFO][4736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Namespace="calico-system" Pod="csi-node-driver-9klxk" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cc6cae7-6092-4840-b2c3-065b3bb220f3", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"", Pod:"csi-node-driver-9klxk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e8af65e0a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:52.384611 containerd[1848]: 2025-12-12 18:15:52.371 [INFO][4736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.130/32] ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Namespace="calico-system" Pod="csi-node-driver-9klxk" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" Dec 12 18:15:52.384611 containerd[1848]: 2025-12-12 18:15:52.371 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e8af65e0a0 ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Namespace="calico-system" Pod="csi-node-driver-9klxk" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" Dec 12 18:15:52.384611 containerd[1848]: 2025-12-12 18:15:52.374 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Namespace="calico-system" Pod="csi-node-driver-9klxk" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" Dec 12 18:15:52.384689 containerd[1848]: 2025-12-12 18:15:52.374 [INFO][4736] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Namespace="calico-system" Pod="csi-node-driver-9klxk" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cc6cae7-6092-4840-b2c3-065b3bb220f3", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee", Pod:"csi-node-driver-9klxk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e8af65e0a0", MAC:"4a:42:e9:a0:1c:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:52.384750 containerd[1848]: 2025-12-12 18:15:52.382 [INFO][4736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" Namespace="calico-system" Pod="csi-node-driver-9klxk" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-csi--node--driver--9klxk-eth0" Dec 12 18:15:52.392000 audit[4794]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4794 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:52.392000 audit[4794]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7fff3c6bf760 a2=0 a3=7fff3c6bf74c items=0 ppid=4456 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.392000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:52.409353 containerd[1848]: time="2025-12-12T18:15:52.409037195Z" level=info msg="connecting to shim 1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee" address="unix:///run/containerd/s/7358baf3e182013723fda26e2971a0d885d856e3df494fb3ac2b93c4b0b07ce7" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:52.433873 systemd[1]: Started cri-containerd-1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee.scope - libcontainer container 1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee. 
Dec 12 18:15:52.441000 audit: BPF prog-id=211 op=LOAD Dec 12 18:15:52.442000 audit: BPF prog-id=212 op=LOAD Dec 12 18:15:52.442000 audit[4814]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4803 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135383362373934363862653966303931306365616639366161643662 Dec 12 18:15:52.442000 audit: BPF prog-id=212 op=UNLOAD Dec 12 18:15:52.442000 audit[4814]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4803 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135383362373934363862653966303931306365616639366161643662 Dec 12 18:15:52.442000 audit: BPF prog-id=213 op=LOAD Dec 12 18:15:52.442000 audit[4814]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4803 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135383362373934363862653966303931306365616639366161643662 Dec 12 18:15:52.442000 audit: BPF prog-id=214 op=LOAD Dec 12 18:15:52.442000 audit[4814]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4803 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135383362373934363862653966303931306365616639366161643662 Dec 12 18:15:52.442000 audit: BPF prog-id=214 op=UNLOAD Dec 12 18:15:52.442000 audit[4814]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4803 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135383362373934363862653966303931306365616639366161643662 Dec 12 18:15:52.442000 audit: BPF prog-id=213 op=UNLOAD Dec 12 18:15:52.442000 audit[4814]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4803 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135383362373934363862653966303931306365616639366161643662 Dec 12 18:15:52.442000 audit: BPF prog-id=215 op=LOAD Dec 12 18:15:52.442000 audit[4814]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4803 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135383362373934363862653966303931306365616639366161643662 Dec 12 18:15:52.457904 containerd[1848]: time="2025-12-12T18:15:52.457851514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9klxk,Uid:4cc6cae7-6092-4840-b2c3-065b3bb220f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"1583b79468be9f0910ceaf96aad6bb1fac0ab6376d70a4ee7bd3e37ac04072ee\"" Dec 12 18:15:52.459726 containerd[1848]: time="2025-12-12T18:15:52.459373139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:15:52.478683 systemd-networkd[1745]: cali58b7417707b: Link UP Dec 12 18:15:52.479098 systemd-networkd[1745]: cali58b7417707b: Gained carrier Dec 12 18:15:52.491442 containerd[1848]: 2025-12-12 18:15:52.314 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0 calico-kube-controllers-5447cc8774- calico-system e457a45d-7eaa-42e2-95fa-b7011451de77 821 0 2025-12-12 18:15:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5447cc8774 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515-1-0-e-14f87f00b0 calico-kube-controllers-5447cc8774-bt5vn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali58b7417707b [] [] }} ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Namespace="calico-system" Pod="calico-kube-controllers-5447cc8774-bt5vn" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-" Dec 12 18:15:52.491442 containerd[1848]: 2025-12-12 18:15:52.314 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Namespace="calico-system" Pod="calico-kube-controllers-5447cc8774-bt5vn" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" Dec 12 18:15:52.491442 containerd[1848]: 2025-12-12 18:15:52.351 [INFO][4764] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" HandleID="k8s-pod-network.2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Workload="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" Dec 12 18:15:52.491642 containerd[1848]: 2025-12-12 18:15:52.351 [INFO][4764] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" HandleID="k8s-pod-network.2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Workload="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-e-14f87f00b0", "pod":"calico-kube-controllers-5447cc8774-bt5vn", "timestamp":"2025-12-12 18:15:52.351206927 +0000 UTC"}, Hostname:"ci-4515-1-0-e-14f87f00b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:15:52.491642 containerd[1848]: 2025-12-12 18:15:52.351 [INFO][4764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:15:52.491642 containerd[1848]: 2025-12-12 18:15:52.370 [INFO][4764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:15:52.491642 containerd[1848]: 2025-12-12 18:15:52.370 [INFO][4764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-14f87f00b0' Dec 12 18:15:52.491642 containerd[1848]: 2025-12-12 18:15:52.446 [INFO][4764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.491642 containerd[1848]: 2025-12-12 18:15:52.452 [INFO][4764] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.491642 containerd[1848]: 2025-12-12 18:15:52.456 [INFO][4764] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.491642 containerd[1848]: 2025-12-12 18:15:52.458 [INFO][4764] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.491642 containerd[1848]: 2025-12-12 18:15:52.460 [INFO][4764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.491922 containerd[1848]: 2025-12-12 18:15:52.460 [INFO][4764] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.491922 containerd[1848]: 2025-12-12 18:15:52.465 [INFO][4764] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c Dec 12 18:15:52.491922 containerd[1848]: 2025-12-12 18:15:52.469 [INFO][4764] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.491922 containerd[1848]: 2025-12-12 18:15:52.475 [INFO][4764] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.131/26] block=192.168.15.128/26 
handle="k8s-pod-network.2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.491922 containerd[1848]: 2025-12-12 18:15:52.475 [INFO][4764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.131/26] handle="k8s-pod-network.2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:52.491922 containerd[1848]: 2025-12-12 18:15:52.475 [INFO][4764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:15:52.491922 containerd[1848]: 2025-12-12 18:15:52.475 [INFO][4764] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.131/26] IPv6=[] ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" HandleID="k8s-pod-network.2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Workload="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" Dec 12 18:15:52.492058 containerd[1848]: 2025-12-12 18:15:52.477 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Namespace="calico-system" Pod="calico-kube-controllers-5447cc8774-bt5vn" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0", GenerateName:"calico-kube-controllers-5447cc8774-", Namespace:"calico-system", SelfLink:"", UID:"e457a45d-7eaa-42e2-95fa-b7011451de77", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5447cc8774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"", Pod:"calico-kube-controllers-5447cc8774-bt5vn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58b7417707b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:52.492111 containerd[1848]: 2025-12-12 18:15:52.477 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.131/32] ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Namespace="calico-system" Pod="calico-kube-controllers-5447cc8774-bt5vn" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" Dec 12 18:15:52.492111 containerd[1848]: 2025-12-12 18:15:52.477 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58b7417707b ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" 
Namespace="calico-system" Pod="calico-kube-controllers-5447cc8774-bt5vn" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" Dec 12 18:15:52.492111 containerd[1848]: 2025-12-12 18:15:52.478 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Namespace="calico-system" Pod="calico-kube-controllers-5447cc8774-bt5vn" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" Dec 12 18:15:52.492171 containerd[1848]: 2025-12-12 18:15:52.480 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Namespace="calico-system" Pod="calico-kube-controllers-5447cc8774-bt5vn" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0", GenerateName:"calico-kube-controllers-5447cc8774-", Namespace:"calico-system", SelfLink:"", UID:"e457a45d-7eaa-42e2-95fa-b7011451de77", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5447cc8774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c", Pod:"calico-kube-controllers-5447cc8774-bt5vn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58b7417707b", MAC:"0e:8c:27:46:65:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:52.492221 containerd[1848]: 2025-12-12 18:15:52.489 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" Namespace="calico-system" Pod="calico-kube-controllers-5447cc8774-bt5vn" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--kube--controllers--5447cc8774--bt5vn-eth0" Dec 12 18:15:52.501000 audit[4849]: NETFILTER_CFG table=filter:126 family=2 entries=40 op=nft_register_chain pid=4849 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:52.501000 audit[4849]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7fffe4961a30 a2=0 a3=7fffe4961a1c items=0 ppid=4456 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 12 18:15:52.501000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:52.517844 containerd[1848]: time="2025-12-12T18:15:52.517747576Z" level=info msg="connecting to shim 2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c" address="unix:///run/containerd/s/736a922aec1ee80e6bfe0a463e483b0738a28d9ddff69af81e36fb81aa546cfa" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:52.548919 systemd[1]: Started cri-containerd-2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c.scope - libcontainer container 2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c. Dec 12 18:15:52.557000 audit: BPF prog-id=216 op=LOAD Dec 12 18:15:52.557000 audit: BPF prog-id=217 op=LOAD Dec 12 18:15:52.557000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4859 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643436303164666335633361303565333361656534633338333237 Dec 12 18:15:52.558000 audit: BPF prog-id=217 op=UNLOAD Dec 12 18:15:52.558000 audit[4868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4859 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643436303164666335633361303565333361656534633338333237 Dec 12 18:15:52.558000 audit: BPF prog-id=218 op=LOAD Dec 12 18:15:52.558000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4859 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643436303164666335633361303565333361656534633338333237 Dec 12 18:15:52.558000 audit: BPF prog-id=219 op=LOAD Dec 12 18:15:52.558000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4859 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643436303164666335633361303565333361656534633338333237 Dec 12 18:15:52.558000 audit: BPF 
prog-id=219 op=UNLOAD Dec 12 18:15:52.558000 audit[4868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4859 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643436303164666335633361303565333361656534633338333237 Dec 12 18:15:52.558000 audit: BPF prog-id=218 op=UNLOAD Dec 12 18:15:52.558000 audit[4868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4859 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643436303164666335633361303565333361656534633338333237 Dec 12 18:15:52.558000 audit: BPF prog-id=220 op=LOAD Dec 12 18:15:52.558000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4859 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:52.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643436303164666335633361303565333361656534633338333237 Dec 12 18:15:52.591108 containerd[1848]: time="2025-12-12T18:15:52.591072332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5447cc8774-bt5vn,Uid:e457a45d-7eaa-42e2-95fa-b7011451de77,Namespace:calico-system,Attempt:0,} returns sandbox id \"2fd4601dfc5c3a05e33aee4c38327a50447e3671cba74dea697c131868c5b00c\"" Dec 12 18:15:52.781844 containerd[1848]: time="2025-12-12T18:15:52.781518401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:52.784078 containerd[1848]: time="2025-12-12T18:15:52.784014452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:15:52.784078 containerd[1848]: time="2025-12-12T18:15:52.784064726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:52.784274 kubelet[3114]: E1212 18:15:52.784231 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:15:52.784613 kubelet[3114]: E1212 18:15:52.784278 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:15:52.784613 kubelet[3114]: E1212 18:15:52.784482 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:52.784768 containerd[1848]: time="2025-12-12T18:15:52.784562010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:15:53.113752 containerd[1848]: time="2025-12-12T18:15:53.113615126Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:53.115766 containerd[1848]: time="2025-12-12T18:15:53.115708735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:15:53.115856 containerd[1848]: time="2025-12-12T18:15:53.115771745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:53.115964 kubelet[3114]: E1212 18:15:53.115925 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:15:53.116009 kubelet[3114]: E1212 18:15:53.115971 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:15:53.116285 containerd[1848]: time="2025-12-12T18:15:53.116230914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:15:53.116384 kubelet[3114]: E1212 18:15:53.116309 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2cfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5447cc8774-bt5vn_calico-system(e457a45d-7eaa-42e2-95fa-b7011451de77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:53.117506 kubelet[3114]: E1212 18:15:53.117461 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:15:53.266181 containerd[1848]: time="2025-12-12T18:15:53.265838919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c8584fc-2p5wd,Uid:d5affb4a-a5c2-4140-9517-3b93721ff225,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:15:53.266181 containerd[1848]: time="2025-12-12T18:15:53.265874358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4vrl2,Uid:a83fad8b-d566-4d95-b74e-3a16ee22e614,Namespace:calico-system,Attempt:0,}" Dec 12 18:15:53.364891 systemd-networkd[1745]: cali7410f41dadf: Link UP Dec 12 18:15:53.365197 systemd-networkd[1745]: cali7410f41dadf: Gained carrier Dec 12 18:15:53.373136 kubelet[3114]: E1212 18:15:53.373091 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:15:53.380039 containerd[1848]: 2025-12-12 18:15:53.305 [INFO][4895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0 calico-apiserver-646c8584fc- calico-apiserver d5affb4a-a5c2-4140-9517-3b93721ff225 818 0 2025-12-12 18:15:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:646c8584fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-e-14f87f00b0 calico-apiserver-646c8584fc-2p5wd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7410f41dadf [] [] }} ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-2p5wd" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-" Dec 12 18:15:53.380039 containerd[1848]: 2025-12-12 18:15:53.305 [INFO][4895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-2p5wd" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" Dec 12 18:15:53.380039 containerd[1848]: 2025-12-12 18:15:53.328 [INFO][4930] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" 
HandleID="k8s-pod-network.2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Workload="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" Dec 12 18:15:53.380264 containerd[1848]: 2025-12-12 18:15:53.328 [INFO][4930] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" HandleID="k8s-pod-network.2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Workload="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-e-14f87f00b0", "pod":"calico-apiserver-646c8584fc-2p5wd", "timestamp":"2025-12-12 18:15:53.32868237 +0000 UTC"}, Hostname:"ci-4515-1-0-e-14f87f00b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:15:53.380264 containerd[1848]: 2025-12-12 18:15:53.328 [INFO][4930] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:15:53.380264 containerd[1848]: 2025-12-12 18:15:53.328 [INFO][4930] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:15:53.380264 containerd[1848]: 2025-12-12 18:15:53.328 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-14f87f00b0' Dec 12 18:15:53.380264 containerd[1848]: 2025-12-12 18:15:53.336 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.380264 containerd[1848]: 2025-12-12 18:15:53.340 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.380264 containerd[1848]: 2025-12-12 18:15:53.345 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.380264 containerd[1848]: 2025-12-12 18:15:53.347 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.380264 containerd[1848]: 2025-12-12 18:15:53.349 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.380486 containerd[1848]: 2025-12-12 18:15:53.349 [INFO][4930] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.380486 containerd[1848]: 2025-12-12 18:15:53.350 [INFO][4930] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e Dec 12 18:15:53.380486 containerd[1848]: 2025-12-12 18:15:53.354 [INFO][4930] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.380486 containerd[1848]: 2025-12-12 18:15:53.360 [INFO][4930] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.132/26] block=192.168.15.128/26 handle="k8s-pod-network.2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.380486 
containerd[1848]: 2025-12-12 18:15:53.361 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.132/26] handle="k8s-pod-network.2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.380486 containerd[1848]: 2025-12-12 18:15:53.361 [INFO][4930] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:15:53.380486 containerd[1848]: 2025-12-12 18:15:53.361 [INFO][4930] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.132/26] IPv6=[] ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" HandleID="k8s-pod-network.2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Workload="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" Dec 12 18:15:53.380618 containerd[1848]: 2025-12-12 18:15:53.362 [INFO][4895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-2p5wd" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0", GenerateName:"calico-apiserver-646c8584fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5affb4a-a5c2-4140-9517-3b93721ff225", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c8584fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"", Pod:"calico-apiserver-646c8584fc-2p5wd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7410f41dadf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:53.380680 containerd[1848]: 2025-12-12 18:15:53.362 [INFO][4895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.132/32] ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-2p5wd" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" Dec 12 18:15:53.380680 containerd[1848]: 2025-12-12 18:15:53.362 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7410f41dadf ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-2p5wd" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" Dec 12 18:15:53.380680 containerd[1848]: 2025-12-12 
18:15:53.365 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-2p5wd" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" Dec 12 18:15:53.381387 containerd[1848]: 2025-12-12 18:15:53.365 [INFO][4895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-2p5wd" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0", GenerateName:"calico-apiserver-646c8584fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5affb4a-a5c2-4140-9517-3b93721ff225", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c8584fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e", Pod:"calico-apiserver-646c8584fc-2p5wd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7410f41dadf", MAC:"9e:ec:71:d8:54:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:53.381783 containerd[1848]: 2025-12-12 18:15:53.377 [INFO][4895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-2p5wd" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--2p5wd-eth0" Dec 12 18:15:53.391000 audit[4965]: NETFILTER_CFG table=filter:127 family=2 entries=58 op=nft_register_chain pid=4965 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:53.393213 kernel: kauditd_printk_skb: 281 callbacks suppressed Dec 12 18:15:53.393269 kernel: audit: type=1325 audit(1765563353.391:672): table=filter:127 family=2 entries=58 op=nft_register_chain pid=4965 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:53.391000 audit[4965]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7fffeeaf6a30 a2=0 a3=7fffeeaf6a1c items=0 ppid=4456 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:15:53.396654 kernel: audit: type=1300 audit(1765563353.391:672): arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7fffeeaf6a30 a2=0 a3=7fffeeaf6a1c items=0 ppid=4456 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.391000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:53.400774 kernel: audit: type=1327 audit(1765563353.391:672): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:53.409940 containerd[1848]: time="2025-12-12T18:15:53.409514648Z" level=info msg="connecting to shim 2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e" address="unix:///run/containerd/s/a08af0406c86bb77f104deb63445a83e3d63aa12fc2d869377b778c14b6dc10f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:53.436926 systemd[1]: Started cri-containerd-2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e.scope - libcontainer container 2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e. Dec 12 18:15:53.446000 audit: BPF prog-id=221 op=LOAD Dec 12 18:15:53.446000 audit: BPF prog-id=222 op=LOAD Dec 12 18:15:53.449162 kernel: audit: type=1334 audit(1765563353.446:673): prog-id=221 op=LOAD Dec 12 18:15:53.449237 kernel: audit: type=1334 audit(1765563353.446:674): prog-id=222 op=LOAD Dec 12 18:15:53.446000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4974 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263366664346233633662333032646335633164643531343737343030 Dec 12 18:15:53.454895 kernel: audit: type=1300 audit(1765563353.446:674): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4974 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.454959 kernel: audit: type=1327 audit(1765563353.446:674): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263366664346233633662333032646335633164643531343737343030 Dec 12 18:15:53.446000 audit: BPF prog-id=222 op=UNLOAD Dec 12 18:15:53.458112 kernel: audit: type=1334 audit(1765563353.446:675): prog-id=222 op=UNLOAD Dec 12 18:15:53.446000 audit[4985]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4974 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.460261 kernel: audit: type=1300 audit(1765563353.446:675): arch=c000003e syscall=3 success=yes 
exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4974 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263366664346233633662333032646335633164643531343737343030 Dec 12 18:15:53.464337 kernel: audit: type=1327 audit(1765563353.446:675): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263366664346233633662333032646335633164643531343737343030 Dec 12 18:15:53.464537 containerd[1848]: time="2025-12-12T18:15:53.464495625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:53.466426 containerd[1848]: time="2025-12-12T18:15:53.466380927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:15:53.466522 containerd[1848]: time="2025-12-12T18:15:53.466441254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:53.466684 kubelet[3114]: E1212 18:15:53.466627 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:15:53.466764 kubelet[3114]: E1212 18:15:53.466689 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:15:53.466856 kubelet[3114]: E1212 18:15:53.466817 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:53.446000 audit: BPF prog-id=223 op=LOAD Dec 12 18:15:53.446000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4974 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263366664346233633662333032646335633164643531343737343030 Dec 12 18:15:53.446000 audit: BPF prog-id=224 op=LOAD Dec 12 18:15:53.446000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4974 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263366664346233633662333032646335633164643531343737343030 Dec 12 18:15:53.446000 audit: BPF prog-id=224 op=UNLOAD Dec 12 18:15:53.446000 audit[4985]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4974 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263366664346233633662333032646335633164643531343737343030 Dec 12 18:15:53.446000 audit: BPF prog-id=223 op=UNLOAD Dec 12 18:15:53.446000 audit[4985]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4974 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263366664346233633662333032646335633164643531343737343030 Dec 12 18:15:53.446000 audit: BPF prog-id=225 op=LOAD Dec 12 18:15:53.446000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4974 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263366664346233633662333032646335633164643531343737343030 Dec 12 18:15:53.468120 kubelet[3114]: E1212 18:15:53.468025 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:15:53.470822 systemd-networkd[1745]: calib261a2a467b: Link UP Dec 12 18:15:53.471155 systemd-networkd[1745]: calib261a2a467b: Gained carrier Dec 12 18:15:53.481272 containerd[1848]: 2025-12-12 18:15:53.316 [INFO][4904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0 goldmane-666569f655- calico-system a83fad8b-d566-4d95-b74e-3a16ee22e614 822 0 2025-12-12 18:15:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515-1-0-e-14f87f00b0 goldmane-666569f655-4vrl2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib261a2a467b [] [] }} 
ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Namespace="calico-system" Pod="goldmane-666569f655-4vrl2" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-" Dec 12 18:15:53.481272 containerd[1848]: 2025-12-12 18:15:53.316 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Namespace="calico-system" Pod="goldmane-666569f655-4vrl2" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" Dec 12 18:15:53.481272 containerd[1848]: 2025-12-12 18:15:53.341 [INFO][4940] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" HandleID="k8s-pod-network.3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Workload="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" Dec 12 18:15:53.481436 containerd[1848]: 2025-12-12 18:15:53.341 [INFO][4940] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" HandleID="k8s-pod-network.3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Workload="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-e-14f87f00b0", "pod":"goldmane-666569f655-4vrl2", "timestamp":"2025-12-12 18:15:53.341213962 +0000 UTC"}, Hostname:"ci-4515-1-0-e-14f87f00b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:15:53.481436 containerd[1848]: 2025-12-12 18:15:53.341 [INFO][4940] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:15:53.481436 containerd[1848]: 2025-12-12 18:15:53.361 [INFO][4940] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:15:53.481436 containerd[1848]: 2025-12-12 18:15:53.361 [INFO][4940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-14f87f00b0' Dec 12 18:15:53.481436 containerd[1848]: 2025-12-12 18:15:53.435 [INFO][4940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.481436 containerd[1848]: 2025-12-12 18:15:53.441 [INFO][4940] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.481436 containerd[1848]: 2025-12-12 18:15:53.449 [INFO][4940] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.481436 containerd[1848]: 2025-12-12 18:15:53.452 [INFO][4940] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.481436 containerd[1848]: 2025-12-12 18:15:53.455 [INFO][4940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.481640 containerd[1848]: 2025-12-12 18:15:53.455 [INFO][4940] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.481640 containerd[1848]: 2025-12-12 18:15:53.456 [INFO][4940] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea Dec 12 18:15:53.481640 containerd[1848]: 2025-12-12 18:15:53.461 [INFO][4940] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.481640 containerd[1848]: 2025-12-12 18:15:53.467 [INFO][4940] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.133/26] block=192.168.15.128/26 handle="k8s-pod-network.3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.481640 containerd[1848]: 2025-12-12 18:15:53.467 [INFO][4940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.133/26] handle="k8s-pod-network.3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:53.481640 containerd[1848]: 2025-12-12 18:15:53.467 [INFO][4940] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:15:53.481640 containerd[1848]: 2025-12-12 18:15:53.467 [INFO][4940] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.133/26] IPv6=[] ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" HandleID="k8s-pod-network.3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Workload="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" Dec 12 18:15:53.481791 containerd[1848]: 2025-12-12 18:15:53.469 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Namespace="calico-system" Pod="goldmane-666569f655-4vrl2" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a83fad8b-d566-4d95-b74e-3a16ee22e614", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"", Pod:"goldmane-666569f655-4vrl2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib261a2a467b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:53.481846 containerd[1848]: 2025-12-12 18:15:53.469 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.133/32] ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Namespace="calico-system" Pod="goldmane-666569f655-4vrl2" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" Dec 12 18:15:53.481846 containerd[1848]: 2025-12-12 18:15:53.469 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib261a2a467b ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Namespace="calico-system" Pod="goldmane-666569f655-4vrl2" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" Dec 12 18:15:53.481846 containerd[1848]: 2025-12-12 18:15:53.471 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Namespace="calico-system" Pod="goldmane-666569f655-4vrl2" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" Dec 12 18:15:53.481906 containerd[1848]: 2025-12-12 18:15:53.471 [INFO][4904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" 
Namespace="calico-system" Pod="goldmane-666569f655-4vrl2" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a83fad8b-d566-4d95-b74e-3a16ee22e614", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea", Pod:"goldmane-666569f655-4vrl2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib261a2a467b", MAC:"2a:a6:97:ed:6b:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:53.481955 containerd[1848]: 2025-12-12 18:15:53.478 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" Namespace="calico-system" Pod="goldmane-666569f655-4vrl2" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-goldmane--666569f655--4vrl2-eth0" Dec 12 18:15:53.488244 containerd[1848]: time="2025-12-12T18:15:53.488186985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c8584fc-2p5wd,Uid:d5affb4a-a5c2-4140-9517-3b93721ff225,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2c6fd4b3c6b302dc5c1dd51477400cfed1e4a667f40e50d90f581822dbe0d99e\"" Dec 12 18:15:53.489682 containerd[1848]: time="2025-12-12T18:15:53.489626359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:15:53.497000 audit[5020]: NETFILTER_CFG table=filter:128 family=2 entries=56 op=nft_register_chain pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:53.497000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=28744 a0=3 a1=7ffefa761680 a2=0 a3=7ffefa76166c items=0 ppid=4456 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.497000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:53.511109 containerd[1848]: time="2025-12-12T18:15:53.511062216Z" level=info msg="connecting to shim 3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea" address="unix:///run/containerd/s/69a7405278aedd761db3e27edc3a745bb607baf7e118f14961837d7ccf62eac7" namespace=k8s.io protocol=ttrpc 
version=3 Dec 12 18:15:53.541901 systemd[1]: Started cri-containerd-3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea.scope - libcontainer container 3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea. Dec 12 18:15:53.550000 audit: BPF prog-id=226 op=LOAD Dec 12 18:15:53.550000 audit: BPF prog-id=227 op=LOAD Dec 12 18:15:53.550000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5029 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323865613231393038633236353461353466393939363231333535 Dec 12 18:15:53.550000 audit: BPF prog-id=227 op=UNLOAD Dec 12 18:15:53.550000 audit[5040]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5029 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323865613231393038633236353461353466393939363231333535 Dec 12 18:15:53.550000 audit: BPF prog-id=228 op=LOAD Dec 12 18:15:53.550000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5029 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323865613231393038633236353461353466393939363231333535 Dec 12 18:15:53.550000 audit: BPF prog-id=229 op=LOAD Dec 12 18:15:53.550000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5029 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323865613231393038633236353461353466393939363231333535 Dec 12 18:15:53.550000 audit: BPF prog-id=229 op=UNLOAD Dec 12 18:15:53.550000 audit[5040]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5029 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.550000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323865613231393038633236353461353466393939363231333535 Dec 12 18:15:53.550000 audit: BPF prog-id=228 op=UNLOAD Dec 12 18:15:53.550000 audit[5040]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5029 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323865613231393038633236353461353466393939363231333535 Dec 12 18:15:53.550000 audit: BPF prog-id=230 op=LOAD Dec 12 18:15:53.550000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5029 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323865613231393038633236353461353466393939363231333535 Dec 12 18:15:53.588543 containerd[1848]: time="2025-12-12T18:15:53.588504188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4vrl2,Uid:a83fad8b-d566-4d95-b74e-3a16ee22e614,Namespace:calico-system,Attempt:0,} returns sandbox id \"3528ea21908c2654a54f999621355792232151d0527b75383563c62d406549ea\"" Dec 12 18:15:53.654865 systemd-networkd[1745]: cali58b7417707b: Gained IPv6LL Dec 12 18:15:53.782908 systemd-networkd[1745]: cali0e8af65e0a0: Gained IPv6LL Dec 12 18:15:53.824212 containerd[1848]: time="2025-12-12T18:15:53.824136558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:53.826594 containerd[1848]: time="2025-12-12T18:15:53.826394767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:15:53.826594 containerd[1848]: time="2025-12-12T18:15:53.826508635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:53.826849 kubelet[3114]: E1212 18:15:53.826654 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:15:53.826849 kubelet[3114]: E1212 18:15:53.826725 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:15:53.827223 
containerd[1848]: time="2025-12-12T18:15:53.827018800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:15:53.827704 kubelet[3114]: E1212 18:15:53.827269 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8hc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-646c8584fc-2p5wd_calico-apiserver(d5affb4a-a5c2-4140-9517-3b93721ff225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:53.828551 kubelet[3114]: E1212 18:15:53.828513 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:15:54.156170 containerd[1848]: time="2025-12-12T18:15:54.156100913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:54.158132 containerd[1848]: time="2025-12-12T18:15:54.158086187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:15:54.158201 containerd[1848]: time="2025-12-12T18:15:54.158115411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:54.158330 kubelet[3114]: E1212 18:15:54.158286 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:15:54.158390 kubelet[3114]: E1212 18:15:54.158350 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:15:54.158529 kubelet[3114]: E1212 18:15:54.158484 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxhcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4vrl2_calico-system(a83fad8b-d566-4d95-b74e-3a16ee22e614): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:54.159704 kubelet[3114]: E1212 18:15:54.159654 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:15:54.372439 kubelet[3114]: E1212 18:15:54.372366 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:15:54.373568 kubelet[3114]: E1212 18:15:54.373523 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:15:54.374185 kubelet[3114]: E1212 18:15:54.374160 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:15:54.374911 kubelet[3114]: E1212 18:15:54.374673 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:15:54.394000 audit[5066]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:54.394000 audit[5066]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcab817230 a2=0 a3=7ffcab81721c items=0 ppid=3256 pid=5066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:54.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:54.405000 audit[5066]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:54.405000 audit[5066]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcab817230 a2=0 a3=0 items=0 ppid=3256 pid=5066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:54.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:54.435000 audit[5068]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:54.435000 audit[5068]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd3465b520 a2=0 a3=7ffd3465b50c items=0 ppid=3256 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:54.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:54.448000 audit[5068]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:54.448000 audit[5068]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd3465b520 a2=0 a3=0 items=0 ppid=3256 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:54.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 
18:15:54.870826 systemd-networkd[1745]: cali7410f41dadf: Gained IPv6LL Dec 12 18:15:55.255939 systemd-networkd[1745]: calib261a2a467b: Gained IPv6LL Dec 12 18:15:55.266281 containerd[1848]: time="2025-12-12T18:15:55.266206828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c8584fc-mwbp6,Uid:fffa851a-7d3a-4af7-80b6-6f040212a19b,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:15:55.354984 systemd-networkd[1745]: cali01d1100df14: Link UP Dec 12 18:15:55.355546 systemd-networkd[1745]: cali01d1100df14: Gained carrier Dec 12 18:15:55.370154 containerd[1848]: 2025-12-12 18:15:55.300 [INFO][5075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0 calico-apiserver-646c8584fc- calico-apiserver fffa851a-7d3a-4af7-80b6-6f040212a19b 819 0 2025-12-12 18:15:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:646c8584fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-e-14f87f00b0 calico-apiserver-646c8584fc-mwbp6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali01d1100df14 [] [] }} ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-mwbp6" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-" Dec 12 18:15:55.370154 containerd[1848]: 2025-12-12 18:15:55.300 [INFO][5075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-mwbp6" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" Dec 12 18:15:55.370154 containerd[1848]: 2025-12-12 18:15:55.322 [INFO][5093] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" HandleID="k8s-pod-network.57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Workload="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" Dec 12 18:15:55.370415 containerd[1848]: 2025-12-12 18:15:55.322 [INFO][5093] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" HandleID="k8s-pod-network.57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Workload="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000134830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-e-14f87f00b0", "pod":"calico-apiserver-646c8584fc-mwbp6", "timestamp":"2025-12-12 18:15:55.322695145 +0000 UTC"}, Hostname:"ci-4515-1-0-e-14f87f00b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:15:55.370415 containerd[1848]: 2025-12-12 18:15:55.322 [INFO][5093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:15:55.370415 containerd[1848]: 2025-12-12 18:15:55.322 [INFO][5093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:15:55.370415 containerd[1848]: 2025-12-12 18:15:55.322 [INFO][5093] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-14f87f00b0' Dec 12 18:15:55.370415 containerd[1848]: 2025-12-12 18:15:55.328 [INFO][5093] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:55.370415 containerd[1848]: 2025-12-12 18:15:55.332 [INFO][5093] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:55.370415 containerd[1848]: 2025-12-12 18:15:55.336 [INFO][5093] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:55.370415 containerd[1848]: 2025-12-12 18:15:55.338 [INFO][5093] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:55.370415 containerd[1848]: 2025-12-12 18:15:55.340 [INFO][5093] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:55.370638 containerd[1848]: 2025-12-12 18:15:55.340 [INFO][5093] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:55.370638 containerd[1848]: 2025-12-12 18:15:55.341 [INFO][5093] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652 Dec 12 18:15:55.370638 containerd[1848]: 2025-12-12 18:15:55.345 [INFO][5093] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:55.370638 containerd[1848]: 2025-12-12 18:15:55.351 [INFO][5093] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.134/26] block=192.168.15.128/26 handle="k8s-pod-network.57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:55.370638 containerd[1848]: 2025-12-12 18:15:55.351 [INFO][5093] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.134/26] handle="k8s-pod-network.57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:55.370638 containerd[1848]: 2025-12-12 18:15:55.351 [INFO][5093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
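The audit records interleaved with the sandbox starts (the NETFILTER_CFG entries and the runc BPF loads) carry PROCTITLE fields, which are the invoking command line hex-encoded with NUL-separated arguments. A small decoding sketch (the helper name is illustrative, not part of auditd, containerd, or Calico):

```python
# Decode a Linux audit PROCTITLE hex string back into argv.
def decode_proctitle(hex_str: str) -> list[str]:
    """PROCTITLE values are the process argv, hex-encoded with NUL separators."""
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

if __name__ == "__main__":
    # proctitle value taken from one of the NETFILTER_CFG audit records above
    sample = ("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
              "002D2D766572626F7365002D2D77616974003130"
              "002D2D776169742D696E74657276616C003530303030")
    print(decode_proctitle(sample))
    # ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']
```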
Dec 12 18:15:55.370638 containerd[1848]: 2025-12-12 18:15:55.351 [INFO][5093] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.134/26] IPv6=[] ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" HandleID="k8s-pod-network.57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Workload="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" Dec 12 18:15:55.370820 containerd[1848]: 2025-12-12 18:15:55.352 [INFO][5075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-mwbp6" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0", GenerateName:"calico-apiserver-646c8584fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"fffa851a-7d3a-4af7-80b6-6f040212a19b", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c8584fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"", Pod:"calico-apiserver-646c8584fc-mwbp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01d1100df14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:55.370878 containerd[1848]: 2025-12-12 18:15:55.353 [INFO][5075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.134/32] ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-mwbp6" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" Dec 12 18:15:55.370878 containerd[1848]: 2025-12-12 18:15:55.353 [INFO][5075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01d1100df14 ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-mwbp6" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" Dec 12 18:15:55.370878 containerd[1848]: 2025-12-12 18:15:55.355 [INFO][5075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-mwbp6" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" Dec 12 18:15:55.370948 containerd[1848]: 2025-12-12 
18:15:55.356 [INFO][5075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-mwbp6" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0", GenerateName:"calico-apiserver-646c8584fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"fffa851a-7d3a-4af7-80b6-6f040212a19b", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646c8584fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652", Pod:"calico-apiserver-646c8584fc-mwbp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01d1100df14", MAC:"d2:ac:ff:71:a4:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:55.371002 containerd[1848]: 2025-12-12 18:15:55.368 [INFO][5075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" Namespace="calico-apiserver" Pod="calico-apiserver-646c8584fc-mwbp6" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-calico--apiserver--646c8584fc--mwbp6-eth0" Dec 12 18:15:55.376846 kubelet[3114]: E1212 18:15:55.376771 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:15:55.376846 kubelet[3114]: E1212 18:15:55.376827 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:15:55.386000 audit[5112]: NETFILTER_CFG 
table=filter:133 family=2 entries=59 op=nft_register_chain pid=5112 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:55.386000 audit[5112]: SYSCALL arch=c000003e syscall=46 success=yes exit=29492 a0=3 a1=7ffec23f97d0 a2=0 a3=7ffec23f97bc items=0 ppid=4456 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.386000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:55.400626 containerd[1848]: time="2025-12-12T18:15:55.399797659Z" level=info msg="connecting to shim 57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652" address="unix:///run/containerd/s/2edb830929c1fe09a32ceb91d519ab4510be62d0e346dc5c158c2330d0f2529f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:55.426903 systemd[1]: Started cri-containerd-57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652.scope - libcontainer container 57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652. Dec 12 18:15:55.437000 audit: BPF prog-id=231 op=LOAD Dec 12 18:15:55.438000 audit: BPF prog-id=232 op=LOAD Dec 12 18:15:55.438000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5121 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623635326538633237353231666133363461616462303538333063 Dec 12 18:15:55.438000 audit: BPF prog-id=232 op=UNLOAD Dec 12 18:15:55.438000 audit[5133]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5121 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623635326538633237353231666133363461616462303538333063 Dec 12 18:15:55.438000 audit: BPF prog-id=233 op=LOAD Dec 12 18:15:55.438000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5121 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623635326538633237353231666133363461616462303538333063 Dec 12 18:15:55.438000 audit: BPF prog-id=234 op=LOAD Dec 12 18:15:55.438000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5121 pid=5133 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623635326538633237353231666133363461616462303538333063 Dec 12 18:15:55.438000 audit: BPF prog-id=234 op=UNLOAD Dec 12 18:15:55.438000 audit[5133]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5121 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623635326538633237353231666133363461616462303538333063 Dec 12 18:15:55.438000 audit: BPF prog-id=233 op=UNLOAD Dec 12 18:15:55.438000 audit[5133]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5121 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623635326538633237353231666133363461616462303538333063 Dec 12 18:15:55.438000 audit: BPF prog-id=235 op=LOAD Dec 12 18:15:55.438000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5121 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623635326538633237353231666133363461616462303538333063 Dec 12 18:15:55.472657 containerd[1848]: time="2025-12-12T18:15:55.472579315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646c8584fc-mwbp6,Uid:fffa851a-7d3a-4af7-80b6-6f040212a19b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"57b652e8c27521fa364aadb05830cde970304961bef211220003e5107446b652\"" Dec 12 18:15:55.474001 containerd[1848]: time="2025-12-12T18:15:55.473970580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:15:55.857433 containerd[1848]: time="2025-12-12T18:15:55.857166915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:55.862222 containerd[1848]: time="2025-12-12T18:15:55.862147490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:15:55.862222 containerd[1848]: 
time="2025-12-12T18:15:55.862181125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:55.862422 kubelet[3114]: E1212 18:15:55.862381 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:15:55.862472 kubelet[3114]: E1212 18:15:55.862428 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:15:55.862623 kubelet[3114]: E1212 18:15:55.862584 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxfb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-646c8584fc-mwbp6_calico-apiserver(fffa851a-7d3a-4af7-80b6-6f040212a19b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:55.863778 kubelet[3114]: E1212 18:15:55.863742 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:15:56.266518 containerd[1848]: time="2025-12-12T18:15:56.266460148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pt4l8,Uid:1fa5f3cd-fa74-4d8b-91cb-263e77629d8c,Namespace:kube-system,Attempt:0,}" Dec 12 18:15:56.266899 containerd[1848]: time="2025-12-12T18:15:56.266867407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zjmxw,Uid:fd160278-5916-4c1e-b8d8-6da7c31295f3,Namespace:kube-system,Attempt:0,}" Dec 12 18:15:56.367595 systemd-networkd[1745]: cali67e6addf42e: Link UP Dec 12 18:15:56.367860 systemd-networkd[1745]: cali67e6addf42e: Gained carrier Dec 12 18:15:56.379906 kubelet[3114]: E1212 18:15:56.379836 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:15:56.381564 containerd[1848]: 2025-12-12 18:15:56.312 [INFO][5158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0 coredns-674b8bbfcf- kube-system 1fa5f3cd-fa74-4d8b-91cb-263e77629d8c 820 0 2025-12-12 18:15:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-e-14f87f00b0 coredns-674b8bbfcf-pt4l8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67e6addf42e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pt4l8" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-" Dec 12 18:15:56.381564 containerd[1848]: 2025-12-12 18:15:56.312 [INFO][5158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pt4l8" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" Dec 12 18:15:56.381564 containerd[1848]: 2025-12-12 18:15:56.332 [INFO][5195] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" HandleID="k8s-pod-network.293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Workload="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" Dec 12 18:15:56.381731 containerd[1848]: 2025-12-12 18:15:56.333 [INFO][5195] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" HandleID="k8s-pod-network.293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" 
Workload="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000616ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-e-14f87f00b0", "pod":"coredns-674b8bbfcf-pt4l8", "timestamp":"2025-12-12 18:15:56.332975313 +0000 UTC"}, Hostname:"ci-4515-1-0-e-14f87f00b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:15:56.381731 containerd[1848]: 2025-12-12 18:15:56.333 [INFO][5195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:15:56.381731 containerd[1848]: 2025-12-12 18:15:56.333 [INFO][5195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:15:56.381731 containerd[1848]: 2025-12-12 18:15:56.333 [INFO][5195] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-14f87f00b0' Dec 12 18:15:56.381731 containerd[1848]: 2025-12-12 18:15:56.338 [INFO][5195] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.381731 containerd[1848]: 2025-12-12 18:15:56.343 [INFO][5195] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.381731 containerd[1848]: 2025-12-12 18:15:56.346 [INFO][5195] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.381731 containerd[1848]: 2025-12-12 18:15:56.348 [INFO][5195] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.381731 containerd[1848]: 2025-12-12 18:15:56.349 [INFO][5195] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.381920 containerd[1848]: 2025-12-12 18:15:56.349 [INFO][5195] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.381920 containerd[1848]: 2025-12-12 18:15:56.350 [INFO][5195] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a Dec 12 18:15:56.381920 containerd[1848]: 2025-12-12 18:15:56.354 [INFO][5195] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.381920 containerd[1848]: 2025-12-12 18:15:56.360 [INFO][5195] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.135/26] block=192.168.15.128/26 handle="k8s-pod-network.293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.381920 containerd[1848]: 2025-12-12 18:15:56.360 [INFO][5195] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.135/26] handle="k8s-pod-network.293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.381920 containerd[1848]: 2025-12-12 18:15:56.360 [INFO][5195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:15:56.381920 containerd[1848]: 2025-12-12 18:15:56.360 [INFO][5195] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.135/26] IPv6=[] ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" HandleID="k8s-pod-network.293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Workload="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" Dec 12 18:15:56.382048 containerd[1848]: 2025-12-12 18:15:56.362 [INFO][5158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pt4l8" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1fa5f3cd-fa74-4d8b-91cb-263e77629d8c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"", Pod:"coredns-674b8bbfcf-pt4l8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67e6addf42e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:56.382048 containerd[1848]: 2025-12-12 18:15:56.362 [INFO][5158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.135/32] ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pt4l8" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" Dec 12 18:15:56.382048 containerd[1848]: 2025-12-12 18:15:56.362 [INFO][5158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67e6addf42e ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pt4l8" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" Dec 12 18:15:56.382048 containerd[1848]: 2025-12-12 18:15:56.367 [INFO][5158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-pt4l8" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" Dec 12 18:15:56.382048 containerd[1848]: 2025-12-12 18:15:56.368 [INFO][5158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pt4l8" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1fa5f3cd-fa74-4d8b-91cb-263e77629d8c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a", Pod:"coredns-674b8bbfcf-pt4l8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67e6addf42e", MAC:"be:92:08:8b:64:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:56.382048 containerd[1848]: 2025-12-12 18:15:56.379 [INFO][5158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pt4l8" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--pt4l8-eth0" Dec 12 18:15:56.397000 audit[5228]: NETFILTER_CFG table=filter:134 family=2 entries=58 op=nft_register_chain pid=5228 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:56.397000 audit[5228]: SYSCALL arch=c000003e syscall=46 success=yes exit=27288 a0=3 a1=7ffdf3de73a0 a2=0 a3=7ffdf3de738c items=0 ppid=4456 pid=5228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.397000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:56.410289 containerd[1848]: time="2025-12-12T18:15:56.410236985Z" 
level=info msg="connecting to shim 293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a" address="unix:///run/containerd/s/7b71e1b408631500d09f903ff8fa43e07335be5dbdce4e07afd9ba2c19985a40" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:56.412000 audit[5239]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:56.412000 audit[5239]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd1b64cd20 a2=0 a3=7ffd1b64cd0c items=0 ppid=3256 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:56.426000 audit[5239]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:56.426000 audit[5239]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd1b64cd20 a2=0 a3=0 items=0 ppid=3256 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:56.434893 systemd[1]: Started cri-containerd-293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a.scope - libcontainer container 293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a. 
Dec 12 18:15:56.444000 audit: BPF prog-id=236 op=LOAD Dec 12 18:15:56.444000 audit: BPF prog-id=237 op=LOAD Dec 12 18:15:56.444000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5238 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333935326263343735663263616337643234363336626634346365 Dec 12 18:15:56.444000 audit: BPF prog-id=237 op=UNLOAD Dec 12 18:15:56.444000 audit[5249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5238 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333935326263343735663263616337643234363336626634346365 Dec 12 18:15:56.444000 audit: BPF prog-id=238 op=LOAD Dec 12 18:15:56.444000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5238 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333935326263343735663263616337643234363336626634346365 Dec 12 18:15:56.444000 audit: BPF prog-id=239 op=LOAD Dec 12 18:15:56.444000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5238 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333935326263343735663263616337643234363336626634346365 Dec 12 18:15:56.445000 audit: BPF prog-id=239 op=UNLOAD Dec 12 18:15:56.445000 audit[5249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5238 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333935326263343735663263616337643234363336626634346365 Dec 12 18:15:56.445000 audit: BPF prog-id=238 op=UNLOAD Dec 12 18:15:56.445000 audit[5249]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5238 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333935326263343735663263616337643234363336626634346365 Dec 12 18:15:56.445000 audit: BPF prog-id=240 op=LOAD Dec 12 18:15:56.445000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5238 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333935326263343735663263616337643234363336626634346365 Dec 12 18:15:56.467473 systemd-networkd[1745]: cali4f0b779602f: Link UP Dec 12 18:15:56.468298 systemd-networkd[1745]: cali4f0b779602f: Gained carrier Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.313 [INFO][5165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0 coredns-674b8bbfcf- kube-system fd160278-5916-4c1e-b8d8-6da7c31295f3 815 0 2025-12-12 18:15:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-e-14f87f00b0 coredns-674b8bbfcf-zjmxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4f0b779602f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjmxw" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.313 [INFO][5165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjmxw" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.332 [INFO][5197] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" HandleID="k8s-pod-network.b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Workload="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.333 [INFO][5197] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" HandleID="k8s-pod-network.b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Workload="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d5310), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-e-14f87f00b0", "pod":"coredns-674b8bbfcf-zjmxw", "timestamp":"2025-12-12 18:15:56.332974118 +0000 UTC"}, Hostname:"ci-4515-1-0-e-14f87f00b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.333 [INFO][5197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.360 [INFO][5197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.361 [INFO][5197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-14f87f00b0' Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.439 [INFO][5197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.443 [INFO][5197] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.447 [INFO][5197] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.448 [INFO][5197] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.450 [INFO][5197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.450 [INFO][5197] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.451 [INFO][5197] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6 Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.455 [INFO][5197] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.462 [INFO][5197] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.136/26] block=192.168.15.128/26 handle="k8s-pod-network.b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.462 [INFO][5197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.136/26] handle="k8s-pod-network.b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" host="ci-4515-1-0-e-14f87f00b0" Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.462 [INFO][5197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:15:56.479427 containerd[1848]: 2025-12-12 18:15:56.463 [INFO][5197] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.136/26] IPv6=[] ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" HandleID="k8s-pod-network.b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Workload="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" Dec 12 18:15:56.479954 containerd[1848]: 2025-12-12 18:15:56.464 [INFO][5165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjmxw" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd160278-5916-4c1e-b8d8-6da7c31295f3", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"", Pod:"coredns-674b8bbfcf-zjmxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f0b779602f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:56.479954 containerd[1848]: 2025-12-12 18:15:56.464 [INFO][5165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.136/32] ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjmxw" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" Dec 12 18:15:56.479954 containerd[1848]: 2025-12-12 18:15:56.464 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f0b779602f ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjmxw" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" Dec 12 18:15:56.479954 containerd[1848]: 2025-12-12 18:15:56.468 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-zjmxw" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" Dec 12 18:15:56.479954 containerd[1848]: 2025-12-12 18:15:56.469 [INFO][5165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjmxw" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd160278-5916-4c1e-b8d8-6da7c31295f3", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 15, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-14f87f00b0", ContainerID:"b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6", Pod:"coredns-674b8bbfcf-zjmxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f0b779602f", MAC:"76:9b:4b:8e:7d:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:15:56.479954 containerd[1848]: 2025-12-12 18:15:56.478 [INFO][5165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjmxw" WorkloadEndpoint="ci--4515--1--0--e--14f87f00b0-k8s-coredns--674b8bbfcf--zjmxw-eth0" Dec 12 18:15:56.481155 containerd[1848]: time="2025-12-12T18:15:56.481122821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pt4l8,Uid:1fa5f3cd-fa74-4d8b-91cb-263e77629d8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a\"" Dec 12 18:15:56.486889 containerd[1848]: time="2025-12-12T18:15:56.486851975Z" level=info msg="CreateContainer within sandbox \"293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:15:56.491000 audit[5283]: NETFILTER_CFG table=filter:137 family=2 entries=52 op=nft_register_chain pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:15:56.491000 audit[5283]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=23892 a0=3 a1=7fff4afd33c0 a2=0 a3=7fff4afd33ac items=0 ppid=4456 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.491000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:15:56.503474 containerd[1848]: time="2025-12-12T18:15:56.503425679Z" level=info msg="Container 8b492a85d012c6814c284f027303752beec758e0e787c5d5f48a07906908f05f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:56.511497 containerd[1848]: time="2025-12-12T18:15:56.511440848Z" level=info msg="connecting to shim b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6" address="unix:///run/containerd/s/8f52d8794cf5376fba019919d40e1b13016d4b29c43bfe4c86a4086fc07acfea" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:15:56.511610 containerd[1848]: time="2025-12-12T18:15:56.511577815Z" level=info msg="CreateContainer within sandbox \"293952bc475f2cac7d24636bf44cee6203238c3c28e168841ffdd05bd85a2d7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b492a85d012c6814c284f027303752beec758e0e787c5d5f48a07906908f05f\"" Dec 12 18:15:56.512470 containerd[1848]: time="2025-12-12T18:15:56.512430616Z" level=info msg="StartContainer for \"8b492a85d012c6814c284f027303752beec758e0e787c5d5f48a07906908f05f\"" Dec 12 18:15:56.513176 containerd[1848]: time="2025-12-12T18:15:56.513154266Z" level=info msg="connecting to shim 8b492a85d012c6814c284f027303752beec758e0e787c5d5f48a07906908f05f" address="unix:///run/containerd/s/7b71e1b408631500d09f903ff8fa43e07335be5dbdce4e07afd9ba2c19985a40" protocol=ttrpc version=3 Dec 12 18:15:56.536917 systemd[1]: Started cri-containerd-8b492a85d012c6814c284f027303752beec758e0e787c5d5f48a07906908f05f.scope - libcontainer container 8b492a85d012c6814c284f027303752beec758e0e787c5d5f48a07906908f05f. Dec 12 18:15:56.539728 systemd[1]: Started cri-containerd-b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6.scope - libcontainer container b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6. 
Dec 12 18:15:56.548000 audit: BPF prog-id=241 op=LOAD Dec 12 18:15:56.549000 audit: BPF prog-id=242 op=LOAD Dec 12 18:15:56.549000 audit[5298]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5238 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343932613835643031326336383134633238346630323733303337 Dec 12 18:15:56.549000 audit: BPF prog-id=242 op=UNLOAD Dec 12 18:15:56.549000 audit[5298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5238 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343932613835643031326336383134633238346630323733303337 Dec 12 18:15:56.549000 audit: BPF prog-id=243 op=LOAD Dec 12 18:15:56.549000 audit[5298]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5238 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343932613835643031326336383134633238346630323733303337 Dec 12 18:15:56.549000 audit: BPF prog-id=244 op=LOAD Dec 12 18:15:56.549000 audit: BPF prog-id=245 op=LOAD Dec 12 18:15:56.549000 audit[5298]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5238 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343932613835643031326336383134633238346630323733303337 Dec 12 18:15:56.549000 audit: BPF prog-id=245 op=UNLOAD Dec 12 18:15:56.549000 audit[5298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5238 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343932613835643031326336383134633238346630323733303337 Dec 12 18:15:56.549000 audit: BPF prog-id=243 
op=UNLOAD Dec 12 18:15:56.549000 audit[5298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5238 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343932613835643031326336383134633238346630323733303337 Dec 12 18:15:56.550000 audit: BPF prog-id=246 op=LOAD Dec 12 18:15:56.550000 audit[5298]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5238 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862343932613835643031326336383134633238346630323733303337 Dec 12 18:15:56.551000 audit: BPF prog-id=247 op=LOAD Dec 12 18:15:56.551000 audit[5309]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5292 pid=5309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437346437343862343932303532353863653534643763663830 Dec 12 18:15:56.551000 audit: BPF prog-id=247 op=UNLOAD Dec 12 18:15:56.551000 audit[5309]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5292 pid=5309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437346437343862343932303532353863653534643763663830 Dec 12 18:15:56.551000 audit: BPF prog-id=248 op=LOAD Dec 12 18:15:56.551000 audit[5309]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5292 pid=5309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437346437343862343932303532353863653534643763663830 Dec 12 18:15:56.551000 audit: BPF prog-id=249 op=LOAD Dec 12 18:15:56.551000 audit[5309]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5292 pid=5309 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437346437343862343932303532353863653534643763663830 Dec 12 18:15:56.551000 audit: BPF prog-id=249 op=UNLOAD Dec 12 18:15:56.551000 audit[5309]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5292 pid=5309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437346437343862343932303532353863653534643763663830 Dec 12 18:15:56.551000 audit: BPF prog-id=248 op=UNLOAD Dec 12 18:15:56.551000 audit[5309]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5292 pid=5309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437346437343862343932303532353863653534643763663830 Dec 12 18:15:56.551000 audit: BPF prog-id=250 op=LOAD Dec 12 18:15:56.551000 audit[5309]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5292 pid=5309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437346437343862343932303532353863653534643763663830 Dec 12 18:15:56.568220 containerd[1848]: time="2025-12-12T18:15:56.568168974Z" level=info msg="StartContainer for \"8b492a85d012c6814c284f027303752beec758e0e787c5d5f48a07906908f05f\" returns successfully" Dec 12 18:15:56.594164 containerd[1848]: time="2025-12-12T18:15:56.594125320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zjmxw,Uid:fd160278-5916-4c1e-b8d8-6da7c31295f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6\"" Dec 12 18:15:56.600725 containerd[1848]: time="2025-12-12T18:15:56.600570945Z" level=info msg="CreateContainer within sandbox \"b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:15:56.618525 containerd[1848]: time="2025-12-12T18:15:56.618454469Z" level=info msg="Container d03ee5695610c132a2314e507141d022b3a06ca2d53c85967f28ba9d67836427: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:15:56.629180 
containerd[1848]: time="2025-12-12T18:15:56.629120775Z" level=info msg="CreateContainer within sandbox \"b09474d748b49205258ce54d7cf809275d963a33315ed4161d7b079c0ea498f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d03ee5695610c132a2314e507141d022b3a06ca2d53c85967f28ba9d67836427\"" Dec 12 18:15:56.629646 containerd[1848]: time="2025-12-12T18:15:56.629618139Z" level=info msg="StartContainer for \"d03ee5695610c132a2314e507141d022b3a06ca2d53c85967f28ba9d67836427\"" Dec 12 18:15:56.630446 containerd[1848]: time="2025-12-12T18:15:56.630416324Z" level=info msg="connecting to shim d03ee5695610c132a2314e507141d022b3a06ca2d53c85967f28ba9d67836427" address="unix:///run/containerd/s/8f52d8794cf5376fba019919d40e1b13016d4b29c43bfe4c86a4086fc07acfea" protocol=ttrpc version=3 Dec 12 18:15:56.659958 systemd[1]: Started cri-containerd-d03ee5695610c132a2314e507141d022b3a06ca2d53c85967f28ba9d67836427.scope - libcontainer container d03ee5695610c132a2314e507141d022b3a06ca2d53c85967f28ba9d67836427. Dec 12 18:15:56.669000 audit: BPF prog-id=251 op=LOAD Dec 12 18:15:56.669000 audit: BPF prog-id=252 op=LOAD Dec 12 18:15:56.669000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5292 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430336565353639353631306331333261323331346535303731343164 Dec 12 18:15:56.669000 audit: BPF prog-id=252 op=UNLOAD Dec 12 18:15:56.669000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5292 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430336565353639353631306331333261323331346535303731343164 Dec 12 18:15:56.670000 audit: BPF prog-id=253 op=LOAD Dec 12 18:15:56.670000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5292 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430336565353639353631306331333261323331346535303731343164 Dec 12 18:15:56.670000 audit: BPF prog-id=254 op=LOAD Dec 12 18:15:56.670000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5292 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.670000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430336565353639353631306331333261323331346535303731343164 Dec 12 18:15:56.670000 audit: BPF prog-id=254 op=UNLOAD Dec 12 18:15:56.670000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5292 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430336565353639353631306331333261323331346535303731343164 Dec 12 18:15:56.670000 audit: BPF prog-id=253 op=UNLOAD Dec 12 18:15:56.670000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5292 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430336565353639353631306331333261323331346535303731343164 Dec 12 18:15:56.670000 audit: BPF prog-id=255 op=LOAD Dec 12 18:15:56.670000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5292 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430336565353639353631306331333261323331346535303731343164 Dec 12 18:15:56.690079 containerd[1848]: time="2025-12-12T18:15:56.690018360Z" level=info msg="StartContainer for \"d03ee5695610c132a2314e507141d022b3a06ca2d53c85967f28ba9d67836427\" returns successfully" Dec 12 18:15:56.790870 systemd-networkd[1745]: cali01d1100df14: Gained IPv6LL Dec 12 18:15:57.384323 kubelet[3114]: E1212 18:15:57.384192 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:15:57.393848 kubelet[3114]: I1212 18:15:57.393790 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zjmxw" podStartSLOduration=45.393772499 podStartE2EDuration="45.393772499s" podCreationTimestamp="2025-12-12 18:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:15:57.392930401 +0000 UTC m=+51.205453540" watchObservedRunningTime="2025-12-12 18:15:57.393772499 +0000 UTC m=+51.206295658" Dec 12 18:15:57.406000 audit[5404]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=5404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:57.406000 audit[5404]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf331f630 a2=0 a3=7ffcf331f61c items=0 ppid=3256 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:57.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:57.414955 kubelet[3114]: I1212 18:15:57.414895 3114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pt4l8" podStartSLOduration=45.414867834 podStartE2EDuration="45.414867834s" podCreationTimestamp="2025-12-12 18:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:15:57.414348947 +0000 UTC m=+51.226872083" watchObservedRunningTime="2025-12-12 18:15:57.414867834 +0000 UTC m=+51.227390970" Dec 12 18:15:57.416000 audit[5404]: NETFILTER_CFG table=nat:139 family=2 entries=14 op=nft_register_rule pid=5404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:57.416000 audit[5404]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf331f630 a2=0 a3=0 items=0 ppid=3256 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:57.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:57.446000 audit[5406]: NETFILTER_CFG table=filter:140 family=2 entries=17 op=nft_register_rule pid=5406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:57.446000 audit[5406]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd30099e80 a2=0 a3=7ffd30099e6c items=0 ppid=3256 pid=5406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:57.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:57.472000 audit[5406]: NETFILTER_CFG table=nat:141 family=2 entries=35 op=nft_register_chain pid=5406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:57.472000 audit[5406]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd30099e80 a2=0 a3=7ffd30099e6c items=0 ppid=3256 pid=5406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:57.472000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:57.686937 systemd-networkd[1745]: 
cali4f0b779602f: Gained IPv6LL Dec 12 18:15:58.198952 systemd-networkd[1745]: cali67e6addf42e: Gained IPv6LL Dec 12 18:15:58.414000 audit[5408]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:58.416073 kernel: kauditd_printk_skb: 189 callbacks suppressed Dec 12 18:15:58.416125 kernel: audit: type=1325 audit(1765563358.414:743): table=filter:142 family=2 entries=14 op=nft_register_rule pid=5408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:58.414000 audit[5408]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe2835dec0 a2=0 a3=7ffe2835deac items=0 ppid=3256 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:58.419158 kernel: audit: type=1300 audit(1765563358.414:743): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe2835dec0 a2=0 a3=7ffe2835deac items=0 ppid=3256 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:58.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:58.423065 kernel: audit: type=1327 audit(1765563358.414:743): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:58.435000 audit[5408]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=5408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:58.435000 audit[5408]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe2835dec0 a2=0 a3=7ffe2835deac items=0 ppid=3256 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:58.440205 kernel: audit: type=1325 audit(1765563358.435:744): table=nat:143 family=2 entries=56 op=nft_register_chain pid=5408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:58.440254 kernel: audit: type=1300 audit(1765563358.435:744): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe2835dec0 a2=0 a3=7ffe2835deac items=0 ppid=3256 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:58.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:58.444232 kernel: audit: type=1327 audit(1765563358.435:744): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:16:05.266939 containerd[1848]: time="2025-12-12T18:16:05.266871662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:16:05.597121 containerd[1848]: time="2025-12-12T18:16:05.596902412Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:05.599454 containerd[1848]: time="2025-12-12T18:16:05.599404780Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:16:05.599532 containerd[1848]: time="2025-12-12T18:16:05.599503040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:05.599774 kubelet[3114]: E1212 18:16:05.599711 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:16:05.600100 kubelet[3114]: E1212 18:16:05.599788 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:16:05.600100 kubelet[3114]: E1212 18:16:05.599983 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2cfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5447cc8774-bt5vn_calico-system(e457a45d-7eaa-42e2-95fa-b7011451de77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:05.601197 kubelet[3114]: E1212 18:16:05.601164 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:16:06.266767 containerd[1848]: time="2025-12-12T18:16:06.266703417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:16:06.813446 containerd[1848]: time="2025-12-12T18:16:06.813376049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:06.816391 containerd[1848]: time="2025-12-12T18:16:06.816274012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:16:06.816391 containerd[1848]: time="2025-12-12T18:16:06.816361118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:06.816680 kubelet[3114]: E1212 18:16:06.816599 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:16:06.816998 kubelet[3114]: E1212 18:16:06.816682 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:16:06.816998 kubelet[3114]: E1212 18:16:06.816821 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:32f7d598b8bd491099e853ab2d7a3772,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:06.819710 containerd[1848]: time="2025-12-12T18:16:06.819645040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:16:07.155484 containerd[1848]: time="2025-12-12T18:16:07.155414387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:07.157209 containerd[1848]: time="2025-12-12T18:16:07.157157780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:16:07.157268 containerd[1848]: time="2025-12-12T18:16:07.157207131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:07.157474 kubelet[3114]: E1212 18:16:07.157410 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:16:07.157474 kubelet[3114]: E1212 18:16:07.157460 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:16:07.157612 kubelet[3114]: E1212 18:16:07.157580 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:07.158781 kubelet[3114]: E1212 18:16:07.158740 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:16:07.265968 containerd[1848]: time="2025-12-12T18:16:07.265928731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:16:07.615956 containerd[1848]: time="2025-12-12T18:16:07.615823141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:07.617928 containerd[1848]: time="2025-12-12T18:16:07.617883174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:16:07.618012 
containerd[1848]: time="2025-12-12T18:16:07.617919497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:07.618107 kubelet[3114]: E1212 18:16:07.618068 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:16:07.618148 kubelet[3114]: E1212 18:16:07.618112 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:16:07.618267 kubelet[3114]: E1212 18:16:07.618232 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:07.620034 containerd[1848]: time="2025-12-12T18:16:07.620010045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:16:07.941119 containerd[1848]: time="2025-12-12T18:16:07.941061646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:07.944436 containerd[1848]: time="2025-12-12T18:16:07.944339502Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:16:07.944562 containerd[1848]: time="2025-12-12T18:16:07.944429483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:07.944703 kubelet[3114]: E1212 18:16:07.944648 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:16:07.944986 kubelet[3114]: E1212 18:16:07.944715 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:16:07.944986 kubelet[3114]: E1212 18:16:07.944852 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:07.946081 kubelet[3114]: E1212 18:16:07.946041 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:16:08.266788 containerd[1848]: time="2025-12-12T18:16:08.266476403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:16:08.600479 containerd[1848]: time="2025-12-12T18:16:08.600241187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:08.602960 containerd[1848]: time="2025-12-12T18:16:08.602914748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:16:08.603052 containerd[1848]: time="2025-12-12T18:16:08.603013399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:08.603369 kubelet[3114]: E1212 18:16:08.603317 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:16:08.603426 kubelet[3114]: E1212 18:16:08.603392 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:16:08.603924 kubelet[3114]: E1212 18:16:08.603700 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxhcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4vrl2_calico-system(a83fad8b-d566-4d95-b74e-3a16ee22e614): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:08.605030 kubelet[3114]: E1212 18:16:08.604983 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:16:10.265755 containerd[1848]: time="2025-12-12T18:16:10.265693495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 
12 18:16:10.607414 containerd[1848]: time="2025-12-12T18:16:10.607272287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:10.608967 containerd[1848]: time="2025-12-12T18:16:10.608922197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:16:10.609041 containerd[1848]: time="2025-12-12T18:16:10.608958666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:10.609148 kubelet[3114]: E1212 18:16:10.609111 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:16:10.609443 kubelet[3114]: E1212 18:16:10.609157 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:16:10.609443 kubelet[3114]: E1212 18:16:10.609290 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8hc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-646c8584fc-2p5wd_calico-apiserver(d5affb4a-a5c2-4140-9517-3b93721ff225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:10.610513 kubelet[3114]: E1212 18:16:10.610474 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:16:11.266921 containerd[1848]: time="2025-12-12T18:16:11.266650809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:16:11.806029 containerd[1848]: time="2025-12-12T18:16:11.805939610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:11.808219 containerd[1848]: time="2025-12-12T18:16:11.808154906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:16:11.808219 containerd[1848]: time="2025-12-12T18:16:11.808201690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:11.808433 kubelet[3114]: E1212 18:16:11.808379 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:16:11.808729 kubelet[3114]: E1212 18:16:11.808437 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:16:11.808729 kubelet[3114]: E1212 18:16:11.808578 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxfb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-646c8584fc-mwbp6_calico-apiserver(fffa851a-7d3a-4af7-80b6-6f040212a19b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:11.809797 kubelet[3114]: E1212 18:16:11.809760 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:16:17.265432 kubelet[3114]: E1212 18:16:17.265373 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:16:20.266534 kubelet[3114]: E1212 18:16:20.266467 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:16:21.266988 kubelet[3114]: E1212 18:16:21.266917 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:16:23.266095 kubelet[3114]: E1212 18:16:23.266021 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:16:25.265876 kubelet[3114]: E1212 18:16:25.265809 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:16:26.266646 kubelet[3114]: E1212 18:16:26.266593 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:16:30.266467 containerd[1848]: time="2025-12-12T18:16:30.266414014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:16:30.623185 
containerd[1848]: time="2025-12-12T18:16:30.622865664Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:30.624783 containerd[1848]: time="2025-12-12T18:16:30.624735496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:16:30.624851 containerd[1848]: time="2025-12-12T18:16:30.624800359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:30.625008 kubelet[3114]: E1212 18:16:30.624966 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:16:30.625281 kubelet[3114]: E1212 18:16:30.625026 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:16:30.625281 kubelet[3114]: E1212 18:16:30.625174 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2cfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5447cc8774-bt5vn_calico-system(e457a45d-7eaa-42e2-95fa-b7011451de77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:30.626419 kubelet[3114]: E1212 18:16:30.626356 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:16:35.266190 containerd[1848]: time="2025-12-12T18:16:35.266112012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:16:35.620655 containerd[1848]: time="2025-12-12T18:16:35.620600878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:35.622502 containerd[1848]: time="2025-12-12T18:16:35.622436213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:16:35.622757 containerd[1848]: time="2025-12-12T18:16:35.622487128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:35.622891 kubelet[3114]: E1212 18:16:35.622815 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:16:35.622891 kubelet[3114]: E1212 18:16:35.622886 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:16:35.623277 kubelet[3114]: E1212 18:16:35.623118 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:32f7d598b8bd491099e853ab2d7a3772,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:35.623370 containerd[1848]: time="2025-12-12T18:16:35.623212331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:16:35.977773 containerd[1848]: time="2025-12-12T18:16:35.977566602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:35.979795 containerd[1848]: time="2025-12-12T18:16:35.979720211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:16:35.979795 containerd[1848]: time="2025-12-12T18:16:35.979765628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:35.979999 kubelet[3114]: E1212 18:16:35.979948 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:16:35.979999 kubelet[3114]: E1212 18:16:35.979998 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:16:35.980349 containerd[1848]: time="2025-12-12T18:16:35.980325764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:16:35.980433 kubelet[3114]: E1212 18:16:35.980344 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:36.312261 containerd[1848]: time="2025-12-12T18:16:36.312136509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:36.314154 containerd[1848]: time="2025-12-12T18:16:36.314113811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:16:36.314294 containerd[1848]: time="2025-12-12T18:16:36.314189736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:36.314477 kubelet[3114]: E1212 18:16:36.314443 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:16:36.314525 kubelet[3114]: E1212 18:16:36.314491 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:16:36.314757 kubelet[3114]: E1212 18:16:36.314713 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:36.314967 containerd[1848]: time="2025-12-12T18:16:36.314901449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:16:36.316062 kubelet[3114]: E1212 18:16:36.316016 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:16:36.630709 containerd[1848]: time="2025-12-12T18:16:36.630613768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:36.632494 containerd[1848]: time="2025-12-12T18:16:36.632439260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:16:36.632591 containerd[1848]: time="2025-12-12T18:16:36.632483476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:36.632720 kubelet[3114]: E1212 18:16:36.632681 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:16:36.632991 kubelet[3114]: E1212 18:16:36.632738 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:16:36.633172 kubelet[3114]: E1212 18:16:36.633042 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 
18:16:36.633337 containerd[1848]: time="2025-12-12T18:16:36.633308291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:16:36.634427 kubelet[3114]: E1212 18:16:36.634388 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:16:36.967763 containerd[1848]: time="2025-12-12T18:16:36.966930156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:36.968929 containerd[1848]: time="2025-12-12T18:16:36.968868995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:16:36.969305 containerd[1848]: time="2025-12-12T18:16:36.968897747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:36.969492 kubelet[3114]: E1212 18:16:36.969438 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:16:36.969547 kubelet[3114]: E1212 18:16:36.969500 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:16:36.969714 kubelet[3114]: E1212 18:16:36.969670 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8hc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-646c8584fc-2p5wd_calico-apiserver(d5affb4a-a5c2-4140-9517-3b93721ff225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:36.970888 kubelet[3114]: E1212 18:16:36.970835 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:16:37.266421 containerd[1848]: time="2025-12-12T18:16:37.266258570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:16:37.603492 containerd[1848]: time="2025-12-12T18:16:37.603094934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:37.605338 containerd[1848]: time="2025-12-12T18:16:37.605275864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:16:37.605431 containerd[1848]: time="2025-12-12T18:16:37.605348439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:37.605724 kubelet[3114]: 
E1212 18:16:37.605634 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:16:37.605724 kubelet[3114]: E1212 18:16:37.605726 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:16:37.605995 kubelet[3114]: E1212 18:16:37.605900 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxhcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4vrl2_calico-system(a83fad8b-d566-4d95-b74e-3a16ee22e614): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:37.607099 kubelet[3114]: E1212 18:16:37.607066 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:16:40.266222 containerd[1848]: time="2025-12-12T18:16:40.266166369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:16:40.600646 containerd[1848]: time="2025-12-12T18:16:40.600227177Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:16:40.602457 containerd[1848]: time="2025-12-12T18:16:40.602227157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:16:40.602457 containerd[1848]: time="2025-12-12T18:16:40.602299877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:16:40.602649 kubelet[3114]: E1212 18:16:40.602443 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:16:40.602649 kubelet[3114]: E1212 18:16:40.602485 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:16:40.603040 kubelet[3114]: E1212 18:16:40.602636 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxfb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-646c8584fc-mwbp6_calico-apiserver(fffa851a-7d3a-4af7-80b6-6f040212a19b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:16:40.603890 kubelet[3114]: E1212 18:16:40.603844 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:16:42.266652 kubelet[3114]: E1212 18:16:42.266578 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:16:47.269654 kubelet[3114]: E1212 18:16:47.269602 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:16:48.266942 kubelet[3114]: E1212 18:16:48.266869 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:16:51.266497 kubelet[3114]: E1212 18:16:51.266446 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:16:51.267423 kubelet[3114]: E1212 18:16:51.267372 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:16:53.265556 kubelet[3114]: E1212 18:16:53.265493 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:16:54.266501 kubelet[3114]: E1212 18:16:54.266428 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:17:01.265808 kubelet[3114]: E1212 18:17:01.265714 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" 
podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:17:02.270916 kubelet[3114]: E1212 18:17:02.270865 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:17:05.266144 kubelet[3114]: E1212 18:17:05.266104 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:17:05.267092 kubelet[3114]: E1212 18:17:05.267049 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:17:06.266318 kubelet[3114]: E1212 18:17:06.266278 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:17:08.266269 kubelet[3114]: E1212 18:17:08.266213 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:17:13.265585 kubelet[3114]: E1212 18:17:13.265536 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:17:16.267392 containerd[1848]: time="2025-12-12T18:17:16.267352603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:17:16.614970 containerd[1848]: time="2025-12-12T18:17:16.614640924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:17:16.616195 containerd[1848]: time="2025-12-12T18:17:16.616157853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:17:16.616260 containerd[1848]: time="2025-12-12T18:17:16.616233772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:17:16.616427 kubelet[3114]: E1212 18:17:16.616390 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:17:16.617403 kubelet[3114]: E1212 18:17:16.616718 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:17:16.617593 kubelet[3114]: E1212 18:17:16.617563 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:17:16.619296 kubelet[3114]: E1212 18:17:16.619269 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:17:18.267022 containerd[1848]: time="2025-12-12T18:17:18.266926122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:17:18.607873 containerd[1848]: time="2025-12-12T18:17:18.607590700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:17:18.609474 containerd[1848]: time="2025-12-12T18:17:18.609420421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:17:18.609579 
containerd[1848]: time="2025-12-12T18:17:18.609456229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:17:18.609747 kubelet[3114]: E1212 18:17:18.609695 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:17:18.610038 kubelet[3114]: E1212 18:17:18.609765 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:17:18.610038 kubelet[3114]: E1212 18:17:18.609982 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:32f7d598b8bd491099e853ab2d7a3772,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:17:18.610514 containerd[1848]: time="2025-12-12T18:17:18.610488095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:17:18.958479 containerd[1848]: time="2025-12-12T18:17:18.958307170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:17:18.960083 containerd[1848]: time="2025-12-12T18:17:18.960022728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:17:18.960199 containerd[1848]: time="2025-12-12T18:17:18.960085760Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:17:18.960541 kubelet[3114]: E1212 18:17:18.960237 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:17:18.960541 kubelet[3114]: E1212 18:17:18.960302 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:17:18.960675 kubelet[3114]: E1212 18:17:18.960566 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2cfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5447cc8774-bt5vn_calico-system(e457a45d-7eaa-42e2-95fa-b7011451de77): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:17:18.960781 containerd[1848]: time="2025-12-12T18:17:18.960641106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:17:18.961951 kubelet[3114]: E1212 18:17:18.961912 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:17:19.265979 kubelet[3114]: E1212 18:17:19.265703 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:17:19.292984 containerd[1848]: time="2025-12-12T18:17:19.292923485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:17:19.294676 containerd[1848]: time="2025-12-12T18:17:19.294613967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:17:19.294826 containerd[1848]: time="2025-12-12T18:17:19.294690272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:17:19.294887 kubelet[3114]: E1212 18:17:19.294847 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:17:19.294931 kubelet[3114]: E1212 18:17:19.294900 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:17:19.295201 kubelet[3114]: E1212 18:17:19.295038 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:17:19.296294 kubelet[3114]: E1212 18:17:19.296236 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:17:20.267093 containerd[1848]: time="2025-12-12T18:17:20.267027006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:17:20.600624 containerd[1848]: time="2025-12-12T18:17:20.600460608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:17:20.604126 containerd[1848]: time="2025-12-12T18:17:20.604062198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 
18:17:20.604302 containerd[1848]: time="2025-12-12T18:17:20.604159047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:17:20.604387 kubelet[3114]: E1212 18:17:20.604340 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:17:20.604720 kubelet[3114]: E1212 18:17:20.604413 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:17:20.604720 kubelet[3114]: E1212 18:17:20.604671 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxhcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4vrl2_calico-system(a83fad8b-d566-4d95-b74e-3a16ee22e614): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:17:20.605922 kubelet[3114]: E1212 18:17:20.605881 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:17:26.266757 containerd[1848]: time="2025-12-12T18:17:26.266714530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:17:26.624218 containerd[1848]: time="2025-12-12T18:17:26.624168556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:17:26.626694 containerd[1848]: time="2025-12-12T18:17:26.626615703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:17:26.626830 containerd[1848]: time="2025-12-12T18:17:26.626707937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:17:26.627180 kubelet[3114]: E1212 18:17:26.626930 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:17:26.627180 kubelet[3114]: E1212 18:17:26.627075 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:17:26.627507 kubelet[3114]: E1212 18:17:26.627287 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8hc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-646c8584fc-2p5wd_calico-apiserver(d5affb4a-a5c2-4140-9517-3b93721ff225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:17:26.628523 kubelet[3114]: E1212 18:17:26.628434 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:17:29.267324 containerd[1848]: time="2025-12-12T18:17:29.267171058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:17:29.609536 containerd[1848]: time="2025-12-12T18:17:29.609406804Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:17:29.611422 containerd[1848]: time="2025-12-12T18:17:29.611329572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:17:29.611616 containerd[1848]: time="2025-12-12T18:17:29.611352824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active 
requests=0, bytes read=0" Dec 12 18:17:29.611657 kubelet[3114]: E1212 18:17:29.611614 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:17:29.612222 kubelet[3114]: E1212 18:17:29.611671 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:17:29.612222 kubelet[3114]: E1212 18:17:29.611815 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:17:29.612997 kubelet[3114]: E1212 18:17:29.612968 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:17:30.266385 kubelet[3114]: E1212 18:17:30.266338 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:17:31.266016 containerd[1848]: time="2025-12-12T18:17:31.265971809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:17:31.637843 containerd[1848]: time="2025-12-12T18:17:31.637797390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:17:31.639582 containerd[1848]: time="2025-12-12T18:17:31.639550597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:17:31.639678 containerd[1848]: time="2025-12-12T18:17:31.639648455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:17:31.639844 kubelet[3114]: E1212 18:17:31.639803 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:17:31.640125 kubelet[3114]: E1212 18:17:31.639865 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:17:31.640125 kubelet[3114]: E1212 18:17:31.640064 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxfb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-646c8584fc-mwbp6_calico-apiserver(fffa851a-7d3a-4af7-80b6-6f040212a19b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:17:31.642199 kubelet[3114]: E1212 18:17:31.641963 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:17:32.266222 kubelet[3114]: E1212 18:17:32.265878 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:17:32.266402 kubelet[3114]: E1212 18:17:32.266336 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:17:34.691260 systemd[1]: Started sshd@9-10.0.8.19:22-139.178.89.65:43892.service - OpenSSH per-connection server daemon (139.178.89.65:43892). Dec 12 18:17:34.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.8.19:22-139.178.89.65:43892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:34.695745 kernel: audit: type=1130 audit(1765563454.689:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.8.19:22-139.178.89.65:43892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:35.499000 audit[5572]: USER_ACCT pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:35.501375 sshd[5572]: Accepted publickey for core from 139.178.89.65 port 43892 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:17:35.502750 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:17:35.500000 audit[5572]: CRED_ACQ pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:35.508512 kernel: audit: type=1101 audit(1765563455.499:746): pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:35.508562 kernel: audit: type=1103 audit(1765563455.500:747): pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:35.513612 kernel: audit: type=1006 audit(1765563455.500:748): pid=5572 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 18:17:35.500000 audit[5572]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3c7cb620 a2=3 a3=0 items=0 ppid=1 pid=5572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:35.515617 systemd-logind[1824]: New session 10 of user core. 
Dec 12 18:17:35.517645 kernel: audit: type=1300 audit(1765563455.500:748): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3c7cb620 a2=3 a3=0 items=0 ppid=1 pid=5572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:35.500000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:35.521696 kernel: audit: type=1327 audit(1765563455.500:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:35.532968 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:17:35.533000 audit[5572]: USER_START pid=5572 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:35.535000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:35.551124 kernel: audit: type=1105 audit(1765563455.533:749): pid=5572 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:35.551205 kernel: audit: type=1103 audit(1765563455.535:750): pid=5575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:36.034320 sshd[5575]: Connection closed by 139.178.89.65 port 43892 Dec 12 18:17:36.034716 sshd-session[5572]: pam_unix(sshd:session): session closed for user core Dec 12 18:17:36.034000 audit[5572]: USER_END pid=5572 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:36.039263 systemd[1]: sshd@9-10.0.8.19:22-139.178.89.65:43892.service: Deactivated successfully. Dec 12 18:17:36.040924 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:17:36.041575 systemd-logind[1824]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:17:36.034000 audit[5572]: CRED_DISP pid=5572 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:36.042349 systemd-logind[1824]: Removed session 10. 
Dec 12 18:17:36.043529 kernel: audit: type=1106 audit(1765563456.034:751): pid=5572 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:36.043584 kernel: audit: type=1104 audit(1765563456.034:752): pid=5572 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:36.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.8.19:22-139.178.89.65:43892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:39.265749 kubelet[3114]: E1212 18:17:39.265644 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:17:40.267228 kubelet[3114]: E1212 18:17:40.267142 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:17:41.200287 systemd[1]: Started sshd@10-10.0.8.19:22-139.178.89.65:48906.service - OpenSSH per-connection server daemon (139.178.89.65:48906). Dec 12 18:17:41.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.8.19:22-139.178.89.65:48906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:41.201118 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:17:41.201157 kernel: audit: type=1130 audit(1765563461.198:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.8.19:22-139.178.89.65:48906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:17:42.009000 audit[5593]: USER_ACCT pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.011348 sshd[5593]: Accepted publickey for core from 139.178.89.65 port 48906 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:17:42.012903 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:17:42.010000 audit[5593]: CRED_ACQ pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.017208 kernel: audit: type=1101 audit(1765563462.009:755): pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.017348 kernel: audit: type=1103 audit(1765563462.010:756): pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.018298 systemd-logind[1824]: New session 11 of user core. Dec 12 18:17:42.010000 audit[5593]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc318c6580 a2=3 a3=0 items=0 ppid=1 pid=5593 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:42.024323 kernel: audit: type=1006 audit(1765563462.010:757): pid=5593 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 12 18:17:42.024416 kernel: audit: type=1300 audit(1765563462.010:757): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc318c6580 a2=3 a3=0 items=0 ppid=1 pid=5593 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:42.010000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:42.028543 kernel: audit: type=1327 audit(1765563462.010:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:42.031923 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 18:17:42.032000 audit[5593]: USER_START pid=5593 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.034000 audit[5596]: CRED_ACQ pid=5596 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.040417 kernel: audit: type=1105 audit(1765563462.032:758): pid=5593 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.040506 kernel: audit: type=1103 audit(1765563462.034:759): pid=5596 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.268191 kubelet[3114]: E1212 18:17:42.267621 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:17:42.537372 sshd[5596]: Connection closed by 139.178.89.65 port 48906 Dec 12 18:17:42.537294 sshd-session[5593]: pam_unix(sshd:session): session closed for user core Dec 12 18:17:42.536000 audit[5593]: USER_END pid=5593 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.540754 systemd[1]: sshd@10-10.0.8.19:22-139.178.89.65:48906.service: Deactivated successfully. Dec 12 18:17:42.542464 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:17:42.543284 systemd-logind[1824]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:17:42.544294 systemd-logind[1824]: Removed session 11. 
Dec 12 18:17:42.536000 audit[5593]: CRED_DISP pid=5593 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.545736 kernel: audit: type=1106 audit(1765563462.536:760): pid=5593 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.545803 kernel: audit: type=1104 audit(1765563462.536:761): pid=5593 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:42.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.8.19:22-139.178.89.65:48906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:42.709286 systemd[1]: Started sshd@11-10.0.8.19:22-139.178.89.65:48908.service - OpenSSH per-connection server daemon (139.178.89.65:48908). Dec 12 18:17:42.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.8.19:22-139.178.89.65:48908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:43.510000 audit[5614]: USER_ACCT pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:43.512568 sshd[5614]: Accepted publickey for core from 139.178.89.65 port 48908 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:17:43.511000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:43.511000 audit[5614]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5716ab20 a2=3 a3=0 items=0 ppid=1 pid=5614 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:43.511000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:43.514150 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:17:43.518849 systemd-logind[1824]: New session 12 of user core. Dec 12 18:17:43.529878 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 18:17:43.530000 audit[5614]: USER_START pid=5614 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:43.532000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:44.059848 sshd[5619]: Connection closed by 139.178.89.65 port 48908 Dec 12 18:17:44.059513 sshd-session[5614]: pam_unix(sshd:session): session closed for user core Dec 12 18:17:44.060000 audit[5614]: USER_END pid=5614 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:44.061000 audit[5614]: CRED_DISP pid=5614 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:44.064450 systemd[1]: sshd@11-10.0.8.19:22-139.178.89.65:48908.service: Deactivated successfully. Dec 12 18:17:44.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.8.19:22-139.178.89.65:48908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:44.066280 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:17:44.067521 systemd-logind[1824]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:17:44.068386 systemd-logind[1824]: Removed session 12. Dec 12 18:17:44.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.8.19:22-139.178.89.65:48912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:44.241424 systemd[1]: Started sshd@12-10.0.8.19:22-139.178.89.65:48912.service - OpenSSH per-connection server daemon (139.178.89.65:48912). 
Dec 12 18:17:45.081000 audit[5634]: USER_ACCT pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:45.083486 sshd[5634]: Accepted publickey for core from 139.178.89.65 port 48912 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:17:45.082000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:45.082000 audit[5634]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb07e7250 a2=3 a3=0 items=0 ppid=1 pid=5634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:45.082000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:45.084525 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:17:45.088433 systemd-logind[1824]: New session 13 of user core. Dec 12 18:17:45.097920 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:17:45.098000 audit[5634]: USER_START pid=5634 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:45.100000 audit[5637]: CRED_ACQ pid=5637 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:45.622898 sshd[5637]: Connection closed by 139.178.89.65 port 48912 Dec 12 18:17:45.623202 sshd-session[5634]: pam_unix(sshd:session): session closed for user core Dec 12 18:17:45.622000 audit[5634]: USER_END pid=5634 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:45.622000 audit[5634]: CRED_DISP pid=5634 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:45.626548 systemd[1]: sshd@12-10.0.8.19:22-139.178.89.65:48912.service: Deactivated successfully. Dec 12 18:17:45.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.8.19:22-139.178.89.65:48912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:45.628146 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:17:45.628810 systemd-logind[1824]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:17:45.629641 systemd-logind[1824]: Removed session 13. 
Dec 12 18:17:46.267015 kubelet[3114]: E1212 18:17:46.266972 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:17:46.267450 kubelet[3114]: E1212 18:17:46.267063 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:17:46.267450 kubelet[3114]: E1212 18:17:46.267082 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:17:50.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.8.19:22-139.178.89.65:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:50.785274 systemd[1]: Started sshd@13-10.0.8.19:22-139.178.89.65:57110.service - OpenSSH per-connection server daemon (139.178.89.65:57110). Dec 12 18:17:50.786202 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 18:17:50.786241 kernel: audit: type=1130 audit(1765563470.783:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.8.19:22-139.178.89.65:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:17:51.266203 kubelet[3114]: E1212 18:17:51.266155 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:17:51.610000 audit[5688]: USER_ACCT pid=5688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:51.613143 sshd[5688]: Accepted publickey for core from 139.178.89.65 port 57110 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:17:51.615396 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:17:51.617717 kernel: audit: type=1101 audit(1765563471.610:782): pid=5688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:51.617817 kernel: audit: type=1103 audit(1765563471.612:783): pid=5688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:51.612000 audit[5688]: CRED_ACQ pid=5688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:51.621849 kernel: audit: type=1006 audit(1765563471.612:784): pid=5688 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 18:17:51.622194 systemd-logind[1824]: New session 14 of user core. 
Dec 12 18:17:51.612000 audit[5688]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2eeb0b20 a2=3 a3=0 items=0 ppid=1 pid=5688 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:51.626549 kernel: audit: type=1300 audit(1765563471.612:784): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2eeb0b20 a2=3 a3=0 items=0 ppid=1 pid=5688 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:51.612000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:51.630082 kernel: audit: type=1327 audit(1765563471.612:784): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:51.631892 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:17:51.633000 audit[5688]: USER_START pid=5688 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:51.636000 audit[5691]: CRED_ACQ pid=5691 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:51.642886 kernel: audit: type=1105 audit(1765563471.633:785): pid=5688 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:51.643006 kernel: audit: type=1103 audit(1765563471.636:786): pid=5691 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:52.140853 sshd[5691]: Connection closed by 139.178.89.65 port 57110 Dec 12 18:17:52.141222 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Dec 12 18:17:52.140000 audit[5688]: USER_END pid=5688 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:52.144952 systemd[1]: sshd@13-10.0.8.19:22-139.178.89.65:57110.service: Deactivated successfully. Dec 12 18:17:52.147003 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 12 18:17:52.147692 kernel: audit: type=1106 audit(1765563472.140:787): pid=5688 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:52.147766 kernel: audit: type=1104 audit(1765563472.140:788): pid=5688 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:52.140000 audit[5688]: CRED_DISP pid=5688 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:52.147903 systemd-logind[1824]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:17:52.149189 systemd-logind[1824]: Removed session 14. Dec 12 18:17:52.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.8.19:22-139.178.89.65:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:54.269681 kubelet[3114]: E1212 18:17:54.268913 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:17:55.267220 kubelet[3114]: E1212 18:17:55.267161 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:17:57.312441 systemd[1]: Started sshd@14-10.0.8.19:22-139.178.89.65:57116.service - OpenSSH per-connection server daemon (139.178.89.65:57116). Dec 12 18:17:57.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.8.19:22-139.178.89.65:57116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:17:57.313374 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:17:57.313424 kernel: audit: type=1130 audit(1765563477.310:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.8.19:22-139.178.89.65:57116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:17:58.136000 audit[5708]: USER_ACCT pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.138409 sshd[5708]: Accepted publickey for core from 139.178.89.65 port 57116 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:17:58.139399 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:17:58.137000 audit[5708]: CRED_ACQ pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.144742 kernel: audit: type=1101 audit(1765563478.136:791): pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.144802 kernel: audit: type=1103 audit(1765563478.137:792): pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.144748 systemd-logind[1824]: New session 15 of user core. Dec 12 18:17:58.148752 kernel: audit: type=1006 audit(1765563478.137:793): pid=5708 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 18:17:58.137000 audit[5708]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbc2aee70 a2=3 a3=0 items=0 ppid=1 pid=5708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:58.150705 kernel: audit: type=1300 audit(1765563478.137:793): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbc2aee70 a2=3 a3=0 items=0 ppid=1 pid=5708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:17:58.137000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:58.155981 kernel: audit: type=1327 audit(1765563478.137:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:17:58.157349 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 18:17:58.157000 audit[5708]: USER_START pid=5708 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.164684 kernel: audit: type=1105 audit(1765563478.157:794): pid=5708 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.164857 kernel: audit: type=1103 audit(1765563478.159:795): pid=5711 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.159000 audit[5711]: CRED_ACQ pid=5711 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.266253 kubelet[3114]: E1212 18:17:58.266201 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:17:58.670211 sshd[5711]: Connection closed by 139.178.89.65 port 57116 Dec 12 18:17:58.670750 sshd-session[5708]: pam_unix(sshd:session): session closed for user core Dec 12 18:17:58.670000 audit[5708]: USER_END pid=5708 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.674443 systemd[1]: sshd@14-10.0.8.19:22-139.178.89.65:57116.service: Deactivated successfully. Dec 12 18:17:58.676056 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:17:58.677209 systemd-logind[1824]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:17:58.670000 audit[5708]: CRED_DISP pid=5708 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.677903 systemd-logind[1824]: Removed session 15. 
Dec 12 18:17:58.678846 kernel: audit: type=1106 audit(1765563478.670:796): pid=5708 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.678897 kernel: audit: type=1104 audit(1765563478.670:797): pid=5708 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:17:58.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.8.19:22-139.178.89.65:57116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:00.265775 kubelet[3114]: E1212 18:18:00.265729 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:18:01.265682 kubelet[3114]: E1212 18:18:01.265627 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:18:03.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.8.19:22-139.178.89.65:34406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:03.851961 systemd[1]: Started sshd@15-10.0.8.19:22-139.178.89.65:34406.service - OpenSSH per-connection server daemon (139.178.89.65:34406). Dec 12 18:18:03.853184 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:18:03.853231 kernel: audit: type=1130 audit(1765563483.850:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.8.19:22-139.178.89.65:34406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:04.690000 audit[5728]: USER_ACCT pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:04.692573 sshd[5728]: Accepted publickey for core from 139.178.89.65 port 34406 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:18:04.694001 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:04.691000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:04.699214 kernel: audit: type=1101 audit(1765563484.690:800): pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:04.699309 kernel: audit: type=1103 audit(1765563484.691:801): pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:04.698945 systemd-logind[1824]: New session 16 of user core. Dec 12 18:18:04.701763 kernel: audit: type=1006 audit(1765563484.691:802): pid=5728 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 18:18:04.691000 audit[5728]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0a41ee50 a2=3 a3=0 items=0 ppid=1 pid=5728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:04.707425 kernel: audit: type=1300 audit(1765563484.691:802): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0a41ee50 a2=3 a3=0 items=0 ppid=1 pid=5728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:04.691000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:04.711038 kernel: audit: type=1327 audit(1765563484.691:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:04.712038 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 18:18:04.712000 audit[5728]: USER_START pid=5728 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:04.714000 audit[5731]: CRED_ACQ pid=5731 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:04.721016 kernel: audit: type=1105 audit(1765563484.712:803): pid=5728 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:04.721166 kernel: audit: type=1103 audit(1765563484.714:804): pid=5731 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:05.240683 sshd[5731]: Connection closed by 139.178.89.65 port 34406 Dec 12 18:18:05.238865 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:05.240000 audit[5728]: USER_END pid=5728 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:05.244415 systemd[1]: sshd@15-10.0.8.19:22-139.178.89.65:34406.service: Deactivated successfully. Dec 12 18:18:05.247810 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:18:05.240000 audit[5728]: CRED_DISP pid=5728 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:05.249004 systemd-logind[1824]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:18:05.250023 kernel: audit: type=1106 audit(1765563485.240:805): pid=5728 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:05.250074 kernel: audit: type=1104 audit(1765563485.240:806): pid=5728 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:05.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.8.19:22-139.178.89.65:34406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:05.254048 systemd-logind[1824]: Removed session 16. 
Dec 12 18:18:05.265812 kubelet[3114]: E1212 18:18:05.265747 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:18:05.405137 systemd[1]: Started sshd@16-10.0.8.19:22-139.178.89.65:34416.service - OpenSSH per-connection server daemon (139.178.89.65:34416). Dec 12 18:18:05.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.8.19:22-139.178.89.65:34416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:06.211000 audit[5748]: USER_ACCT pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:06.213227 sshd[5748]: Accepted publickey for core from 139.178.89.65 port 34416 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:18:06.212000 audit[5748]: CRED_ACQ pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:06.212000 audit[5748]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1c98a150 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:06.212000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:06.214410 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:06.219409 systemd-logind[1824]: New session 17 of user core. Dec 12 18:18:06.233898 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 18:18:06.234000 audit[5748]: USER_START pid=5748 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:06.236000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:06.793815 sshd[5751]: Connection closed by 139.178.89.65 port 34416 Dec 12 18:18:06.794853 sshd-session[5748]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:06.795000 audit[5748]: USER_END pid=5748 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:06.795000 audit[5748]: CRED_DISP pid=5748 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:06.798989 systemd[1]: sshd@16-10.0.8.19:22-139.178.89.65:34416.service: Deactivated successfully. Dec 12 18:18:06.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.8.19:22-139.178.89.65:34416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:06.800722 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:18:06.802810 systemd-logind[1824]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:18:06.804897 systemd-logind[1824]: Removed session 17. Dec 12 18:18:06.960100 systemd[1]: Started sshd@17-10.0.8.19:22-139.178.89.65:34422.service - OpenSSH per-connection server daemon (139.178.89.65:34422). Dec 12 18:18:06.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.8.19:22-139.178.89.65:34422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:07.266705 kubelet[3114]: E1212 18:18:07.266025 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:18:07.267094 kubelet[3114]: E1212 18:18:07.266969 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:18:07.766000 audit[5768]: USER_ACCT pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:07.768386 sshd[5768]: Accepted publickey for core from 139.178.89.65 port 34422 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:18:07.767000 audit[5768]: CRED_ACQ pid=5768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:07.767000 audit[5768]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2a883400 a2=3 a3=0 items=0 ppid=1 pid=5768 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:07.767000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:07.770425 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:07.775967 systemd-logind[1824]: New session 18 of user core. Dec 12 18:18:07.791938 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 18:18:07.792000 audit[5768]: USER_START pid=5768 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:07.794000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:08.639000 audit[5787]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:18:08.639000 audit[5787]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdf265b280 a2=0 a3=7ffdf265b26c items=0 ppid=3256 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:08.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:18:08.647000 audit[5787]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:18:08.647000 audit[5787]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdf265b280 a2=0 a3=0 items=0 ppid=3256 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:08.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:18:08.671000 audit[5789]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:18:08.671000 audit[5789]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc02a23180 a2=0 a3=7ffc02a2316c items=0 ppid=3256 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:08.671000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:18:08.686000 audit[5789]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:18:08.686000 audit[5789]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc02a23180 a2=0 a3=0 items=0 ppid=3256 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:08.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:18:08.804801 sshd[5771]: Connection closed by 139.178.89.65 port 34422 Dec 12 18:18:08.805156 sshd-session[5768]: pam_unix(sshd:session): session closed for user core 
Dec 12 18:18:08.804000 audit[5768]: USER_END pid=5768 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:08.804000 audit[5768]: CRED_DISP pid=5768 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:08.808744 systemd[1]: sshd@17-10.0.8.19:22-139.178.89.65:34422.service: Deactivated successfully. Dec 12 18:18:08.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.8.19:22-139.178.89.65:34422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:08.810328 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:18:08.811001 systemd-logind[1824]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:18:08.812121 systemd-logind[1824]: Removed session 18. Dec 12 18:18:08.977842 systemd[1]: Started sshd@18-10.0.8.19:22-139.178.89.65:34424.service - OpenSSH per-connection server daemon (139.178.89.65:34424). Dec 12 18:18:08.979333 kernel: kauditd_printk_skb: 35 callbacks suppressed Dec 12 18:18:08.979426 kernel: audit: type=1130 audit(1765563488.976:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.8.19:22-139.178.89.65:34424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:08.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.8.19:22-139.178.89.65:34424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:09.797000 audit[5794]: USER_ACCT pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:09.799226 sshd[5794]: Accepted publickey for core from 139.178.89.65 port 34424 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:18:09.800381 sshd-session[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:09.798000 audit[5794]: CRED_ACQ pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:09.804595 systemd-logind[1824]: New session 19 of user core. 
Dec 12 18:18:09.805268 kernel: audit: type=1101 audit(1765563489.797:831): pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:09.805329 kernel: audit: type=1103 audit(1765563489.798:832): pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:09.809261 kernel: audit: type=1006 audit(1765563489.798:833): pid=5794 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 12 18:18:09.798000 audit[5794]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0e5a4e30 a2=3 a3=0 items=0 ppid=1 pid=5794 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:09.813605 kernel: audit: type=1300 audit(1765563489.798:833): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0e5a4e30 a2=3 a3=0 items=0 ppid=1 pid=5794 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:09.798000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:09.817023 kernel: audit: type=1327 audit(1765563489.798:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:09.823942 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 18:18:09.824000 audit[5794]: USER_START pid=5794 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:09.826000 audit[5797]: CRED_ACQ pid=5797 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:09.832176 kernel: audit: type=1105 audit(1765563489.824:834): pid=5794 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:09.832288 kernel: audit: type=1103 audit(1765563489.826:835): pid=5797 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:10.415736 sshd[5797]: Connection closed by 139.178.89.65 port 34424 Dec 12 18:18:10.416164 sshd-session[5794]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:10.417000 audit[5794]: USER_END pid=5794 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:10.421540 systemd[1]: sshd@18-10.0.8.19:22-139.178.89.65:34424.service: Deactivated successfully. Dec 12 18:18:10.423158 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:18:10.417000 audit[5794]: CRED_DISP pid=5794 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:10.423977 systemd-logind[1824]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:18:10.425252 kernel: audit: type=1106 audit(1765563490.417:836): pid=5794 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:10.425311 kernel: audit: type=1104 audit(1765563490.417:837): pid=5794 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:10.425712 systemd-logind[1824]: Removed session 19. Dec 12 18:18:10.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.8.19:22-139.178.89.65:34424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:10.592757 systemd[1]: Started sshd@19-10.0.8.19:22-139.178.89.65:48356.service - OpenSSH per-connection server daemon (139.178.89.65:48356). Dec 12 18:18:10.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.8.19:22-139.178.89.65:48356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:11.404000 audit[5813]: USER_ACCT pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:11.406468 sshd[5813]: Accepted publickey for core from 139.178.89.65 port 48356 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:18:11.405000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:11.405000 audit[5813]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1fe26af0 a2=3 a3=0 items=0 ppid=1 pid=5813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:11.405000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:11.407415 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:11.411351 systemd-logind[1824]: New session 20 of user core. Dec 12 18:18:11.427921 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 18:18:11.429000 audit[5813]: USER_START pid=5813 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:11.431000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:11.933685 sshd[5816]: Connection closed by 139.178.89.65 port 48356 Dec 12 18:18:11.934244 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:11.933000 audit[5813]: USER_END pid=5813 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:11.933000 audit[5813]: CRED_DISP pid=5813 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:11.937792 systemd[1]: sshd@19-10.0.8.19:22-139.178.89.65:48356.service: Deactivated successfully. 
Dec 12 18:18:11.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.8.19:22-139.178.89.65:48356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:11.939394 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:18:11.940039 systemd-logind[1824]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:18:11.940857 systemd-logind[1824]: Removed session 20. Dec 12 18:18:13.265847 kubelet[3114]: E1212 18:18:13.265777 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:18:13.265847 kubelet[3114]: E1212 18:18:13.265829 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:18:13.265000 audit[5835]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5835 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:18:13.265000 audit[5835]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdbe5897a0 a2=0 a3=7ffdbe58978c items=0 ppid=3256 pid=5835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:13.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:18:13.281000 audit[5835]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5835 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:18:13.281000 audit[5835]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdbe5897a0 a2=0 a3=7ffdbe58978c items=0 ppid=3256 pid=5835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:13.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:18:16.267159 kubelet[3114]: E1212 18:18:16.267078 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:18:16.268016 kubelet[3114]: E1212 18:18:16.267978 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:18:17.103516 systemd[1]: Started sshd@20-10.0.8.19:22-139.178.89.65:48370.service - OpenSSH per-connection server daemon (139.178.89.65:48370). Dec 12 18:18:17.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.8.19:22-139.178.89.65:48370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:17.105205 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 12 18:18:17.105261 kernel: audit: type=1130 audit(1765563497.103:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.8.19:22-139.178.89.65:48370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:17.940000 audit[5837]: USER_ACCT pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:17.941054 sshd[5837]: Accepted publickey for core from 139.178.89.65 port 48370 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:18:17.942213 sshd-session[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:17.941000 audit[5837]: CRED_ACQ pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:17.946756 systemd-logind[1824]: New session 21 of user core. 
Dec 12 18:18:17.947278 kernel: audit: type=1101 audit(1765563497.940:851): pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:17.947334 kernel: audit: type=1103 audit(1765563497.941:852): pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:17.951398 kernel: audit: type=1006 audit(1765563497.941:853): pid=5837 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 12 18:18:17.941000 audit[5837]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce29a2210 a2=3 a3=0 items=0 ppid=1 pid=5837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:17.963900 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:18:17.965742 kernel: audit: type=1300 audit(1765563497.941:853): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce29a2210 a2=3 a3=0 items=0 ppid=1 pid=5837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:17.941000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:17.972685 kernel: audit: type=1327 audit(1765563497.941:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:17.968000 audit[5837]: USER_START pid=5837 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:17.972000 audit[5840]: CRED_ACQ pid=5840 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:17.980329 kernel: audit: type=1105 audit(1765563497.968:854): pid=5837 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:17.980436 kernel: audit: type=1103 audit(1765563497.972:855): pid=5840 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:18.474716 sshd[5840]: Connection closed by 139.178.89.65 port 48370 Dec 12 18:18:18.476366 sshd-session[5837]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:18.492000 audit[5837]: USER_END pid=5837 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:18.496946 systemd[1]: sshd@20-10.0.8.19:22-139.178.89.65:48370.service: Deactivated successfully. Dec 12 18:18:18.499678 kernel: audit: type=1106 audit(1765563498.492:856): pid=5837 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:18.499744 kernel: audit: type=1104 audit(1765563498.492:857): pid=5837 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:18.492000 audit[5837]: CRED_DISP pid=5837 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:18.500251 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:18:18.501225 systemd-logind[1824]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:18:18.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.8.19:22-139.178.89.65:48370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:18.502804 systemd-logind[1824]: Removed session 21. 
Dec 12 18:18:21.265341 kubelet[3114]: E1212 18:18:21.265303 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:18:22.266256 kubelet[3114]: E1212 18:18:22.266193 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:18:23.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.8.19:22-139.178.89.65:36450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:23.646225 systemd[1]: Started sshd@21-10.0.8.19:22-139.178.89.65:36450.service - OpenSSH per-connection server daemon (139.178.89.65:36450). Dec 12 18:18:23.647479 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:18:23.647521 kernel: audit: type=1130 audit(1765563503.645:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.8.19:22-139.178.89.65:36450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:24.480000 audit[5886]: USER_ACCT pid=5886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:24.481473 sshd[5886]: Accepted publickey for core from 139.178.89.65 port 36450 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:18:24.482549 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:24.481000 audit[5886]: CRED_ACQ pid=5886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:24.488290 kernel: audit: type=1101 audit(1765563504.480:860): pid=5886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:24.488366 kernel: audit: type=1103 audit(1765563504.481:861): pid=5886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:24.487876 systemd-logind[1824]: New session 22 of user core. Dec 12 18:18:24.491167 kernel: audit: type=1006 audit(1765563504.481:862): pid=5886 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 18:18:24.481000 audit[5886]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe456c3980 a2=3 a3=0 items=0 ppid=1 pid=5886 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:24.494581 kernel: audit: type=1300 audit(1765563504.481:862): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe456c3980 a2=3 a3=0 items=0 ppid=1 pid=5886 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:24.481000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:24.497847 kernel: audit: type=1327 audit(1765563504.481:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:24.499896 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 18:18:24.501000 audit[5886]: USER_START pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:24.507681 kernel: audit: type=1105 audit(1765563504.501:863): pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:24.507000 audit[5889]: CRED_ACQ pid=5889 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:24.511684 kernel: audit: type=1103 audit(1765563504.507:864): pid=5889 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:25.003024 sshd[5889]: Connection closed by 139.178.89.65 port 36450 Dec 12 18:18:25.003420 sshd-session[5886]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:25.004000 audit[5886]: USER_END pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:25.008011 systemd[1]: sshd@21-10.0.8.19:22-139.178.89.65:36450.service: Deactivated successfully. Dec 12 18:18:25.009874 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:18:25.011818 systemd-logind[1824]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:18:25.012466 systemd-logind[1824]: Removed session 22. Dec 12 18:18:25.005000 audit[5886]: CRED_DISP pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:25.018198 kernel: audit: type=1106 audit(1765563505.004:865): pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:25.018318 kernel: audit: type=1104 audit(1765563505.005:866): pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:25.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.8.19:22-139.178.89.65:36450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:27.266222 kubelet[3114]: E1212 18:18:27.266161 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:18:28.266198 kubelet[3114]: E1212 18:18:28.266149 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:18:29.266186 kubelet[3114]: E1212 18:18:29.266147 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:18:30.171882 systemd[1]: Started sshd@22-10.0.8.19:22-139.178.89.65:36464.service - OpenSSH per-connection server daemon (139.178.89.65:36464). Dec 12 18:18:30.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.8.19:22-139.178.89.65:36464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.172855 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:18:30.172916 kernel: audit: type=1130 audit(1765563510.171:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.8.19:22-139.178.89.65:36464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:30.266916 kubelet[3114]: E1212 18:18:30.266143 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:18:30.994000 audit[5912]: USER_ACCT pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:30.995866 sshd[5912]: Accepted publickey for core from 139.178.89.65 port 36464 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:18:30.997073 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:31.000679 kernel: audit: type=1101 audit(1765563510.994:869): pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:30.996000 audit[5912]: CRED_ACQ pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.008683 kernel: audit: type=1103 audit(1765563510.996:870): pid=5912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.008854 systemd-logind[1824]: New session 23 of user core. Dec 12 18:18:30.996000 audit[5912]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff319a1370 a2=3 a3=0 items=0 ppid=1 pid=5912 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:31.016735 kernel: audit: type=1006 audit(1765563510.996:871): pid=5912 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 18:18:31.017099 kernel: audit: type=1300 audit(1765563510.996:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff319a1370 a2=3 a3=0 items=0 ppid=1 pid=5912 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:31.017898 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 18:18:30.996000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:31.020315 kernel: audit: type=1327 audit(1765563510.996:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:31.020000 audit[5912]: USER_START pid=5912 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.023436 kernel: audit: type=1105 audit(1765563511.020:872): pid=5912 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.022000 audit[5915]: CRED_ACQ pid=5915 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.027705 kernel: audit: type=1103 audit(1765563511.022:873): pid=5915 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.526853 sshd[5915]: Connection closed by 139.178.89.65 port 36464 Dec 12 18:18:31.527172 sshd-session[5912]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:31.527000 audit[5912]: USER_END pid=5912 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.530692 systemd[1]: sshd@22-10.0.8.19:22-139.178.89.65:36464.service: Deactivated successfully. Dec 12 18:18:31.532272 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:18:31.532984 systemd-logind[1824]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:18:31.533851 systemd-logind[1824]: Removed session 23. 
Dec 12 18:18:31.527000 audit[5912]: CRED_DISP pid=5912 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.537504 kernel: audit: type=1106 audit(1765563511.527:874): pid=5912 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.537590 kernel: audit: type=1104 audit(1765563511.527:875): pid=5912 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:31.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.8.19:22-139.178.89.65:36464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.267921 kubelet[3114]: E1212 18:18:36.267869 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:18:36.268644 kubelet[3114]: E1212 18:18:36.268610 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:18:36.697406 systemd[1]: Started sshd@23-10.0.8.19:22-139.178.89.65:52014.service - OpenSSH per-connection server daemon (139.178.89.65:52014). Dec 12 18:18:36.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.8.19:22-139.178.89.65:52014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.698298 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:18:36.698347 kernel: audit: type=1130 audit(1765563516.696:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.8.19:22-139.178.89.65:52014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:37.509000 audit[5934]: USER_ACCT pid=5934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:37.509909 sshd[5934]: Accepted publickey for core from 139.178.89.65 port 52014 ssh2: RSA SHA256:P03RT4f8kIelvaSNmaAKLm8NQmzoiYdI93bUyqHqPCw Dec 12 18:18:37.510879 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:37.514679 kernel: audit: type=1101 audit(1765563517.509:878): pid=5934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:37.514802 kernel: audit: type=1103 audit(1765563517.509:879): pid=5934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:37.509000 audit[5934]: CRED_ACQ pid=5934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:37.518762 kernel: audit: type=1006 audit(1765563517.509:880): pid=5934 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 18:18:37.518544 systemd-logind[1824]: New session 24 of user core. Dec 12 18:18:37.509000 audit[5934]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffeb539010 a2=3 a3=0 items=0 ppid=1 pid=5934 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:37.523131 kernel: audit: type=1300 audit(1765563517.509:880): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffeb539010 a2=3 a3=0 items=0 ppid=1 pid=5934 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:37.509000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:37.527352 kernel: audit: type=1327 audit(1765563517.509:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:37.533867 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 12 18:18:37.535000 audit[5934]: USER_START pid=5934 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:37.537000 audit[5937]: CRED_ACQ pid=5937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:37.542480 kernel: audit: type=1105 audit(1765563517.535:881): pid=5934 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:37.542558 kernel: audit: type=1103 audit(1765563517.537:882): pid=5937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:38.030803 sshd[5937]: Connection closed by 139.178.89.65 port 52014 Dec 12 18:18:38.031163 sshd-session[5934]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:38.032000 audit[5934]: USER_END pid=5934 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:38.035564 systemd[1]: sshd@23-10.0.8.19:22-139.178.89.65:52014.service: Deactivated successfully. Dec 12 18:18:38.032000 audit[5934]: CRED_DISP pid=5934 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:38.038228 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:18:38.039124 kernel: audit: type=1106 audit(1765563518.032:883): pid=5934 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:38.039172 kernel: audit: type=1104 audit(1765563518.032:884): pid=5934 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:18:38.039368 systemd-logind[1824]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:18:38.040872 systemd-logind[1824]: Removed session 24. Dec 12 18:18:38.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.8.19:22-139.178.89.65:52014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:40.266149 containerd[1848]: time="2025-12-12T18:18:40.266105665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:18:40.637160 containerd[1848]: time="2025-12-12T18:18:40.637089847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:18:40.638713 containerd[1848]: time="2025-12-12T18:18:40.638648404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:18:40.638790 containerd[1848]: time="2025-12-12T18:18:40.638737747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:18:40.638928 kubelet[3114]: E1212 18:18:40.638881 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:18:40.639210 kubelet[3114]: E1212 18:18:40.638935 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:18:40.639210 kubelet[3114]: E1212 18:18:40.639073 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2cfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5447cc8774-bt5vn_calico-system(e457a45d-7eaa-42e2-95fa-b7011451de77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:18:40.640316 kubelet[3114]: E1212 18:18:40.640255 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:18:41.267030 containerd[1848]: time="2025-12-12T18:18:41.266971364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:18:41.494974 update_engine[1829]: I20251212 18:18:41.494904 1829 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 12 18:18:41.494974 update_engine[1829]: I20251212 18:18:41.494959 1829 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 12 18:18:41.495397 update_engine[1829]: I20251212 18:18:41.495143 1829 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 12 18:18:41.495533 update_engine[1829]: I20251212 18:18:41.495507 1829 omaha_request_params.cc:62] Current group set to beta Dec 12 18:18:41.495873 update_engine[1829]: I20251212 18:18:41.495842 1829 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 12 18:18:41.495873 update_engine[1829]: I20251212 18:18:41.495857 1829 update_attempter.cc:643] Scheduling an action processor start. 
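Each failed pull above follows the same chain: containerd logs the 404 from ghcr.io and the PullImage error, kubelet's image manager reports "Failed to pull image", the runtime manager dumps the full container spec under "Unhandled Error", and pod_workers gives up on the sync with ErrImagePull (later ImagePullBackOff). A small, illustrative Python filter for pulling the distinct missing image references out of journal lines in this format; the regex is keyed to the exact containerd message seen here and is an assumption for illustration, not part of any tool:

import re
import sys

# Matches containerd's error line as it appears in this journal, e.g.
#   level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed"
PULL_FAILED = re.compile(r'level=error msg="PullImage \\"([^"\\]+)\\" failed"')

def missing_images(lines):
    """Collect the distinct image references that containerd failed to pull."""
    seen = set()
    for line in lines:
        m = PULL_FAILED.search(line)
        if m:
            seen.add(m.group(1))
    return sorted(seen)

if __name__ == "__main__":
    # e.g.: journalctl | python3 missing_images.py
    for ref in missing_images(sys.stdin):
        print(ref)
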
Dec 12 18:18:41.495873 update_engine[1829]: I20251212 18:18:41.495872 1829 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 18:18:41.495968 update_engine[1829]: I20251212 18:18:41.495905 1829 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 12 18:18:41.495968 update_engine[1829]: I20251212 18:18:41.495951 1829 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 18:18:41.495968 update_engine[1829]: I20251212 18:18:41.495957 1829 omaha_request_action.cc:272] Request: Dec 12 18:18:41.495968 update_engine[1829]: Dec 12 18:18:41.495968 update_engine[1829]: Dec 12 18:18:41.495968 update_engine[1829]: Dec 12 18:18:41.495968 update_engine[1829]: Dec 12 18:18:41.495968 update_engine[1829]: Dec 12 18:18:41.495968 update_engine[1829]: Dec 12 18:18:41.495968 update_engine[1829]: Dec 12 18:18:41.495968 update_engine[1829]: Dec 12 18:18:41.495968 update_engine[1829]: I20251212 18:18:41.495963 1829 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 18:18:41.497542 update_engine[1829]: I20251212 18:18:41.496990 1829 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 18:18:41.497542 update_engine[1829]: I20251212 18:18:41.497458 1829 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 18:18:41.497826 locksmithd[1873]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 12 18:18:41.503351 update_engine[1829]: E20251212 18:18:41.503289 1829 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 18:18:41.503554 update_engine[1829]: I20251212 18:18:41.503376 1829 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 12 18:18:41.626582 containerd[1848]: time="2025-12-12T18:18:41.626477668Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:18:41.628351 containerd[1848]: time="2025-12-12T18:18:41.628283652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:18:41.628524 containerd[1848]: time="2025-12-12T18:18:41.628380874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:18:41.628623 kubelet[3114]: E1212 18:18:41.628575 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:18:41.628703 kubelet[3114]: E1212 18:18:41.628629 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:18:41.628837 kubelet[3114]: E1212 18:18:41.628783 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxhcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4vrl2_calico-system(a83fad8b-d566-4d95-b74e-3a16ee22e614): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:18:41.630253 kubelet[3114]: E1212 18:18:41.630199 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:18:42.268425 containerd[1848]: time="2025-12-12T18:18:42.268370840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 
18:18:42.817353 containerd[1848]: time="2025-12-12T18:18:42.817244365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:18:42.819259 containerd[1848]: time="2025-12-12T18:18:42.819207391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:18:42.819375 containerd[1848]: time="2025-12-12T18:18:42.819241376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:18:42.819509 kubelet[3114]: E1212 18:18:42.819461 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:18:42.820548 kubelet[3114]: E1212 18:18:42.819518 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:18:42.820548 kubelet[3114]: E1212 18:18:42.819650 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:18:42.821541 kubelet[3114]: E1212 18:18:42.821506 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:18:44.268568 kubelet[3114]: E1212 18:18:44.268524 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:18:49.266356 containerd[1848]: time="2025-12-12T18:18:49.266220727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:18:49.578417 containerd[1848]: time="2025-12-12T18:18:49.578245466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:18:49.580521 containerd[1848]: time="2025-12-12T18:18:49.580443799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:18:49.580643 containerd[1848]: time="2025-12-12T18:18:49.580548414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:18:49.580796 kubelet[3114]: E1212 18:18:49.580738 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:18:49.581080 kubelet[3114]: E1212 18:18:49.580802 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:18:49.581080 kubelet[3114]: E1212 18:18:49.580976 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8hc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-646c8584fc-2p5wd_calico-apiserver(d5affb4a-a5c2-4140-9517-3b93721ff225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:18:49.582230 kubelet[3114]: E1212 18:18:49.582173 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:18:51.265905 containerd[1848]: time="2025-12-12T18:18:51.265869752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:18:51.497145 update_engine[1829]: I20251212 18:18:51.497048 1829 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 18:18:51.497145 update_engine[1829]: I20251212 18:18:51.497138 1829 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 18:18:51.497714 update_engine[1829]: I20251212 18:18:51.497443 1829 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
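The update_engine failures are unrelated to the image pulls: the Omaha request is being posted to the literal host "disabled" (updates switched off), so libcurl cannot resolve it, and the fetcher re-arms its 1-second timeout source and tries again. Going by the timestamps in this log (18:18:41, 18:18:51, and 18:19:01 further down), the attempts land roughly ten seconds apart; a trivial check using only values copied from these lines:

from datetime import datetime

# Timestamps of the three "Setting up timeout source" fetch attempts quoted in this log.
attempts = ["18:18:41.497458", "18:18:51.497443", "18:19:01.494585"]

parsed = [datetime.strptime(t, "%H:%M:%S.%f") for t in attempts]
gaps = [(b - a).total_seconds() for a, b in zip(parsed, parsed[1:])]
print(gaps)  # approximately [10.0, 10.0]: one retry about every ten seconds
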
Dec 12 18:18:51.504143 update_engine[1829]: E20251212 18:18:51.504066 1829 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 18:18:51.504247 update_engine[1829]: I20251212 18:18:51.504149 1829 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 12 18:18:51.596761 containerd[1848]: time="2025-12-12T18:18:51.596597359Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:18:51.598763 containerd[1848]: time="2025-12-12T18:18:51.598700592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:18:51.598884 containerd[1848]: time="2025-12-12T18:18:51.598734609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:18:51.599016 kubelet[3114]: E1212 18:18:51.598965 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:18:51.599430 kubelet[3114]: E1212 18:18:51.599029 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:18:51.599598 kubelet[3114]: E1212 18:18:51.599507 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:32f7d598b8bd491099e853ab2d7a3772,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:18:51.601511 containerd[1848]: time="2025-12-12T18:18:51.601491267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:18:51.924292 containerd[1848]: time="2025-12-12T18:18:51.924187442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:18:51.926033 containerd[1848]: time="2025-12-12T18:18:51.925943476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:18:51.926033 containerd[1848]: time="2025-12-12T18:18:51.926000746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:18:51.926260 kubelet[3114]: E1212 18:18:51.926208 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:18:51.926306 kubelet[3114]: E1212 18:18:51.926268 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:18:51.926542 kubelet[3114]: E1212 18:18:51.926420 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svwkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cbd858cfd-bgn9w_calico-system(dc52e510-0fd6-4a21-9182-a1df4018bab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:18:51.927719 kubelet[3114]: E1212 18:18:51.927642 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:18:52.266626 kubelet[3114]: E1212 18:18:52.266501 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:18:54.265997 kubelet[3114]: 
E1212 18:18:54.265883 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:18:54.267601 containerd[1848]: time="2025-12-12T18:18:54.266825235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:18:54.622320 containerd[1848]: time="2025-12-12T18:18:54.622215227Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:18:54.624265 containerd[1848]: time="2025-12-12T18:18:54.624090420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:18:54.624265 containerd[1848]: time="2025-12-12T18:18:54.624156754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:18:54.624633 kubelet[3114]: E1212 18:18:54.624543 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:18:54.624747 kubelet[3114]: E1212 18:18:54.624634 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:18:54.625137 kubelet[3114]: E1212 18:18:54.625070 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9klxk_calico-system(4cc6cae7-6092-4840-b2c3-065b3bb220f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:18:54.626409 kubelet[3114]: E1212 18:18:54.626344 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:18:55.265721 containerd[1848]: time="2025-12-12T18:18:55.265655034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:18:55.590318 containerd[1848]: time="2025-12-12T18:18:55.590121452Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:18:55.592395 containerd[1848]: time="2025-12-12T18:18:55.592354869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" Dec 12 18:18:55.592481 containerd[1848]: time="2025-12-12T18:18:55.592450238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:18:55.592656 kubelet[3114]: E1212 18:18:55.592613 3114 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:18:55.592855 kubelet[3114]: E1212 18:18:55.592681 3114 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:18:55.592881 kubelet[3114]: E1212 18:18:55.592831 3114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxfb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-646c8584fc-mwbp6_calico-apiserver(fffa851a-7d3a-4af7-80b6-6f040212a19b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:18:55.594085 kubelet[3114]: E1212 18:18:55.594030 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:19:01.494086 update_engine[1829]: I20251212 18:19:01.493960 1829 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 18:19:01.494086 update_engine[1829]: I20251212 18:19:01.494088 1829 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 18:19:01.494621 update_engine[1829]: I20251212 18:19:01.494585 1829 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 18:19:01.503047 update_engine[1829]: E20251212 18:19:01.502964 1829 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 18:19:01.503164 update_engine[1829]: I20251212 18:19:01.503091 1829 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 12 18:19:04.266300 kubelet[3114]: E1212 18:19:04.266257 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:19:04.399096 systemd[1]: cri-containerd-de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34.scope: Deactivated successfully. Dec 12 18:19:04.399419 systemd[1]: cri-containerd-de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34.scope: Consumed 4.023s CPU time, 66.4M memory peak. Dec 12 18:19:04.399000 audit: BPF prog-id=256 op=LOAD Dec 12 18:19:04.400799 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:19:04.401278 kernel: audit: type=1334 audit(1765563544.399:886): prog-id=256 op=LOAD Dec 12 18:19:04.401748 containerd[1848]: time="2025-12-12T18:19:04.401710315Z" level=info msg="received container exit event container_id:\"de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34\" id:\"de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34\" pid:2941 exit_status:1 exited_at:{seconds:1765563544 nanos:401165189}" Dec 12 18:19:04.399000 audit: BPF prog-id=88 op=UNLOAD Dec 12 18:19:04.404194 kernel: audit: type=1334 audit(1765563544.399:887): prog-id=88 op=UNLOAD Dec 12 18:19:04.410000 audit: BPF prog-id=108 op=UNLOAD Dec 12 18:19:04.410000 audit: BPF prog-id=112 op=UNLOAD Dec 12 18:19:04.412948 kernel: audit: type=1334 audit(1765563544.410:888): prog-id=108 op=UNLOAD Dec 12 18:19:04.413088 kernel: audit: type=1334 audit(1765563544.410:889): prog-id=112 op=UNLOAD Dec 12 18:19:04.427390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34-rootfs.mount: Deactivated successfully. Dec 12 18:19:04.646397 systemd[1]: cri-containerd-0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b.scope: Deactivated successfully. Dec 12 18:19:04.646898 systemd[1]: cri-containerd-0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b.scope: Consumed 29.535s CPU time, 109.9M memory peak. 
Dec 12 18:19:04.647822 containerd[1848]: time="2025-12-12T18:19:04.647577626Z" level=info msg="received container exit event container_id:\"0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b\" id:\"0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b\" pid:3478 exit_status:1 exited_at:{seconds:1765563544 nanos:647299225}" Dec 12 18:19:04.660000 audit: BPF prog-id=146 op=UNLOAD Dec 12 18:19:04.660000 audit: BPF prog-id=150 op=UNLOAD Dec 12 18:19:04.663312 kernel: audit: type=1334 audit(1765563544.660:890): prog-id=146 op=UNLOAD Dec 12 18:19:04.663464 kernel: audit: type=1334 audit(1765563544.660:891): prog-id=150 op=UNLOAD Dec 12 18:19:04.670844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b-rootfs.mount: Deactivated successfully. Dec 12 18:19:04.696858 kubelet[3114]: I1212 18:19:04.696796 3114 status_manager.go:895] "Failed to get status for pod" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" pod="calico-system/goldmane-666569f655-4vrl2" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.8.19:53598->10.0.8.11:2379: read: connection timed out" Dec 12 18:19:04.696858 kubelet[3114]: E1212 18:19:04.696769 3114 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.8.19:53468->10.0.8.11:2379: read: connection timed out" event="&Event{ObjectMeta:{goldmane-666569f655-4vrl2.18808a8435eac82d calico-system 1722 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-4vrl2,UID:a83fad8b-d566-4d95-b74e-3a16ee22e614,APIVersion:v1,ResourceVersion:808,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-e-14f87f00b0,},FirstTimestamp:2025-12-12 18:15:54 +0000 UTC,LastTimestamp:2025-12-12 18:18:54.265813632 +0000 UTC m=+228.078336754,Count:11,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-e-14f87f00b0,}" Dec 12 18:19:04.739052 kubelet[3114]: I1212 18:19:04.739010 3114 scope.go:117] "RemoveContainer" containerID="de038c74656a5616e8a5f1b4bfdf4c835a5347cf67e838a46214f96d2e8fbd34" Dec 12 18:19:04.740308 containerd[1848]: time="2025-12-12T18:19:04.740247391Z" level=info msg="CreateContainer within sandbox \"71112cd8a7cac7c3e43910dbacff6282d2abca1916d96d8118e763342c8109ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 12 18:19:04.740440 kubelet[3114]: I1212 18:19:04.740265 3114 scope.go:117] "RemoveContainer" containerID="0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b" Dec 12 18:19:04.741336 containerd[1848]: time="2025-12-12T18:19:04.741304828Z" level=info msg="CreateContainer within sandbox \"112724f19b8143a22ebb263b3856f10942439a24f55c1ff4612e66d05c93cbfb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 12 18:19:04.752849 containerd[1848]: time="2025-12-12T18:19:04.752779274Z" level=info msg="Container 91ed499675343ce790b7227903d76ac8e7cb2de4e13b8934ee7092ce8a21971c: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:04.761311 containerd[1848]: time="2025-12-12T18:19:04.761186195Z" level=info msg="Container 89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f: CDI devices from CRI 
Config.CDIDevices: []" Dec 12 18:19:04.765539 containerd[1848]: time="2025-12-12T18:19:04.765498264Z" level=info msg="CreateContainer within sandbox \"71112cd8a7cac7c3e43910dbacff6282d2abca1916d96d8118e763342c8109ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"91ed499675343ce790b7227903d76ac8e7cb2de4e13b8934ee7092ce8a21971c\"" Dec 12 18:19:04.765961 containerd[1848]: time="2025-12-12T18:19:04.765935900Z" level=info msg="StartContainer for \"91ed499675343ce790b7227903d76ac8e7cb2de4e13b8934ee7092ce8a21971c\"" Dec 12 18:19:04.766820 containerd[1848]: time="2025-12-12T18:19:04.766799857Z" level=info msg="connecting to shim 91ed499675343ce790b7227903d76ac8e7cb2de4e13b8934ee7092ce8a21971c" address="unix:///run/containerd/s/778cf22491b9c224589e8c64aaa5bcdd3fa288119afb8fceb4e77eb33d6fb548" protocol=ttrpc version=3 Dec 12 18:19:04.771934 containerd[1848]: time="2025-12-12T18:19:04.771902996Z" level=info msg="CreateContainer within sandbox \"112724f19b8143a22ebb263b3856f10942439a24f55c1ff4612e66d05c93cbfb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f\"" Dec 12 18:19:04.772613 containerd[1848]: time="2025-12-12T18:19:04.772588103Z" level=info msg="StartContainer for \"89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f\"" Dec 12 18:19:04.773274 containerd[1848]: time="2025-12-12T18:19:04.773247999Z" level=info msg="connecting to shim 89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f" address="unix:///run/containerd/s/066131dd5b783385e45471e969109d2f93ff2be3bbc1d4b2293204d4e2d07319" protocol=ttrpc version=3 Dec 12 18:19:04.787918 systemd[1]: Started cri-containerd-91ed499675343ce790b7227903d76ac8e7cb2de4e13b8934ee7092ce8a21971c.scope - libcontainer container 91ed499675343ce790b7227903d76ac8e7cb2de4e13b8934ee7092ce8a21971c. Dec 12 18:19:04.791180 systemd[1]: Started cri-containerd-89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f.scope - libcontainer container 89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f. 
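With both replacement containers created (kube-controller-manager and tigera-operator, each now at Attempt:1), containerd connects to the task shims over their unix sockets, and the audit records that follow capture runc loading and unloading BPF programs while it sets the containers up. The PROCTITLE fields in those records are runc's hex-encoded argv, with NUL bytes separating the arguments and the container ID cut short by the audit proctitle length limit. A minimal decode of the value quoted verbatim from the next record:

# Hex value copied verbatim from the runc PROCTITLE audit record below.
runc_proctitle = (
    "72756E63002D2D726F6F7400"                                   # runc \0 --root \0
    "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F00"   # /run/containerd/runc/k8s.io \0
    "2D2D6C6F6700"                                               # --log \0
    "2F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E"
    "72756E74696D652E76322E7461736B2F6B38732E696F2F"
    "3839633164613265636234323935356166346335626538326538653536"
)

# argv elements are NUL-separated inside the hex-encoded proctitle.
print(bytes.fromhex(runc_proctitle).decode().split("\x00"))
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
#  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/89c1da2ecb42955af4c5be82e8e56']
# The trailing element is the (truncated) log path containing the new container ID
# 89c1da2ecb42955af4c5be82e8e56... started in the records above.
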
Dec 12 18:19:04.801000 audit: BPF prog-id=257 op=LOAD Dec 12 18:19:04.801000 audit: BPF prog-id=258 op=LOAD Dec 12 18:19:04.803761 kernel: audit: type=1334 audit(1765563544.801:892): prog-id=257 op=LOAD Dec 12 18:19:04.803820 kernel: audit: type=1334 audit(1765563544.801:893): prog-id=258 op=LOAD Dec 12 18:19:04.801000 audit[6033]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3291 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.806325 kernel: audit: type=1300 audit(1765563544.801:893): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3291 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839633164613265636234323935356166346335626538326538653536 Dec 12 18:19:04.811054 kernel: audit: type=1327 audit(1765563544.801:893): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839633164613265636234323935356166346335626538326538653536 Dec 12 18:19:04.801000 audit: BPF prog-id=258 op=UNLOAD Dec 12 18:19:04.801000 audit[6033]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839633164613265636234323935356166346335626538326538653536 Dec 12 18:19:04.801000 audit: BPF prog-id=259 op=LOAD Dec 12 18:19:04.801000 audit[6033]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3291 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839633164613265636234323935356166346335626538326538653536 Dec 12 18:19:04.801000 audit: BPF prog-id=260 op=LOAD Dec 12 18:19:04.801000 audit[6033]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3291 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.801000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839633164613265636234323935356166346335626538326538653536 Dec 12 18:19:04.801000 audit: BPF prog-id=260 op=UNLOAD Dec 12 18:19:04.801000 audit[6033]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839633164613265636234323935356166346335626538326538653536 Dec 12 18:19:04.801000 audit: BPF prog-id=259 op=UNLOAD Dec 12 18:19:04.801000 audit[6033]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3291 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839633164613265636234323935356166346335626538326538653536 Dec 12 18:19:04.801000 audit: BPF prog-id=261 op=LOAD Dec 12 18:19:04.801000 audit[6033]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3291 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839633164613265636234323935356166346335626538326538653536 Dec 12 18:19:04.802000 audit: BPF prog-id=262 op=LOAD Dec 12 18:19:04.803000 audit: BPF prog-id=263 op=LOAD Dec 12 18:19:04.803000 audit[6022]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2813 pid=6022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931656434393936373533343363653739306237323237393033643736 Dec 12 18:19:04.803000 audit: BPF prog-id=263 op=UNLOAD Dec 12 18:19:04.803000 audit[6022]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2813 pid=6022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.803000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931656434393936373533343363653739306237323237393033643736 Dec 12 18:19:04.803000 audit: BPF prog-id=264 op=LOAD Dec 12 18:19:04.803000 audit[6022]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2813 pid=6022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931656434393936373533343363653739306237323237393033643736 Dec 12 18:19:04.804000 audit: BPF prog-id=265 op=LOAD Dec 12 18:19:04.804000 audit[6022]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2813 pid=6022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931656434393936373533343363653739306237323237393033643736 Dec 12 18:19:04.804000 audit: BPF prog-id=265 op=UNLOAD Dec 12 18:19:04.804000 audit[6022]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2813 pid=6022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931656434393936373533343363653739306237323237393033643736 Dec 12 18:19:04.804000 audit: BPF prog-id=264 op=UNLOAD Dec 12 18:19:04.804000 audit[6022]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2813 pid=6022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931656434393936373533343363653739306237323237393033643736 Dec 12 18:19:04.804000 audit: BPF prog-id=266 op=LOAD Dec 12 18:19:04.804000 audit[6022]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2813 pid=6022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:04.804000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931656434393936373533343363653739306237323237393033643736 Dec 12 18:19:04.826678 kubelet[3114]: E1212 18:19:04.826376 3114 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.8.19:53700->10.0.8.11:2379: read: connection timed out" Dec 12 18:19:04.827918 containerd[1848]: time="2025-12-12T18:19:04.827889811Z" level=info msg="StartContainer for \"89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f\" returns successfully" Dec 12 18:19:04.850814 containerd[1848]: time="2025-12-12T18:19:04.850779471Z" level=info msg="StartContainer for \"91ed499675343ce790b7227903d76ac8e7cb2de4e13b8934ee7092ce8a21971c\" returns successfully" Dec 12 18:19:05.266345 kubelet[3114]: E1212 18:19:05.266274 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:19:06.266990 kubelet[3114]: E1212 18:19:06.266946 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:19:07.266473 kubelet[3114]: E1212 18:19:07.266375 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b" Dec 12 18:19:07.266715 kubelet[3114]: E1212 18:19:07.266479 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5447cc8774-bt5vn" podUID="e457a45d-7eaa-42e2-95fa-b7011451de77" Dec 12 18:19:07.266715 kubelet[3114]: E1212 18:19:07.266654 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cbd858cfd-bgn9w" podUID="dc52e510-0fd6-4a21-9182-a1df4018bab8" Dec 12 18:19:10.194376 systemd[1]: cri-containerd-60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338.scope: Deactivated successfully. Dec 12 18:19:10.195114 systemd[1]: cri-containerd-60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338.scope: Consumed 2.613s CPU time, 24.1M memory peak. Dec 12 18:19:10.195000 audit: BPF prog-id=267 op=LOAD Dec 12 18:19:10.196394 containerd[1848]: time="2025-12-12T18:19:10.196331356Z" level=info msg="received container exit event container_id:\"60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338\" id:\"60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338\" pid:2948 exit_status:1 exited_at:{seconds:1765563550 nanos:195877334}" Dec 12 18:19:10.196804 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 12 18:19:10.196860 kernel: audit: type=1334 audit(1765563550.195:908): prog-id=267 op=LOAD Dec 12 18:19:10.195000 audit: BPF prog-id=93 op=UNLOAD Dec 12 18:19:10.200502 kernel: audit: type=1334 audit(1765563550.195:909): prog-id=93 op=UNLOAD Dec 12 18:19:10.202000 audit: BPF prog-id=103 op=UNLOAD Dec 12 18:19:10.202000 audit: BPF prog-id=107 op=UNLOAD Dec 12 18:19:10.205190 kernel: audit: type=1334 audit(1765563550.202:910): prog-id=103 op=UNLOAD Dec 12 18:19:10.205271 kernel: audit: type=1334 audit(1765563550.202:911): prog-id=107 op=UNLOAD Dec 12 18:19:10.223720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338-rootfs.mount: Deactivated successfully. 
Dec 12 18:19:10.757754 kubelet[3114]: I1212 18:19:10.757696 3114 scope.go:117] "RemoveContainer" containerID="60cf9e3b4e130cd98aa12616d5d9d20966b6a3cc13b6ff21855a3621aa679338" Dec 12 18:19:10.759053 containerd[1848]: time="2025-12-12T18:19:10.759008164Z" level=info msg="CreateContainer within sandbox \"97585d37b9464f3bd753eeafab9c79cbfacedb7632826297f9de2cfd02452544\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 12 18:19:10.770413 containerd[1848]: time="2025-12-12T18:19:10.769652829Z" level=info msg="Container ac0ef98630b5d041daa3f1ffdd45bed9f75e52f3564e309fa9bdf4fde4d5debd: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:10.778854 containerd[1848]: time="2025-12-12T18:19:10.778794338Z" level=info msg="CreateContainer within sandbox \"97585d37b9464f3bd753eeafab9c79cbfacedb7632826297f9de2cfd02452544\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ac0ef98630b5d041daa3f1ffdd45bed9f75e52f3564e309fa9bdf4fde4d5debd\"" Dec 12 18:19:10.779244 containerd[1848]: time="2025-12-12T18:19:10.779220406Z" level=info msg="StartContainer for \"ac0ef98630b5d041daa3f1ffdd45bed9f75e52f3564e309fa9bdf4fde4d5debd\"" Dec 12 18:19:10.780107 containerd[1848]: time="2025-12-12T18:19:10.780080938Z" level=info msg="connecting to shim ac0ef98630b5d041daa3f1ffdd45bed9f75e52f3564e309fa9bdf4fde4d5debd" address="unix:///run/containerd/s/6c65586a96ec9353c3e6fa6312cce20dde85b8d75bb9fd0789717e559a1874bf" protocol=ttrpc version=3 Dec 12 18:19:10.802933 systemd[1]: Started cri-containerd-ac0ef98630b5d041daa3f1ffdd45bed9f75e52f3564e309fa9bdf4fde4d5debd.scope - libcontainer container ac0ef98630b5d041daa3f1ffdd45bed9f75e52f3564e309fa9bdf4fde4d5debd. Dec 12 18:19:10.813000 audit: BPF prog-id=268 op=LOAD Dec 12 18:19:10.813000 audit: BPF prog-id=269 op=LOAD Dec 12 18:19:10.816047 kernel: audit: type=1334 audit(1765563550.813:912): prog-id=268 op=LOAD Dec 12 18:19:10.816184 kernel: audit: type=1334 audit(1765563550.813:913): prog-id=269 op=LOAD Dec 12 18:19:10.813000 audit[6116]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2810 pid=6116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:10.818372 kernel: audit: type=1300 audit(1765563550.813:913): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2810 pid=6116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:10.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163306566393836333062356430343164616133663166666464343562 Dec 12 18:19:10.822729 kernel: audit: type=1327 audit(1765563550.813:913): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163306566393836333062356430343164616133663166666464343562 Dec 12 18:19:10.813000 audit: BPF prog-id=269 op=UNLOAD Dec 12 18:19:10.826025 kernel: audit: type=1334 audit(1765563550.813:914): prog-id=269 op=UNLOAD Dec 12 18:19:10.813000 audit[6116]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=6116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:10.828068 kernel: audit: type=1300 audit(1765563550.813:914): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=6116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:10.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163306566393836333062356430343164616133663166666464343562 Dec 12 18:19:10.813000 audit: BPF prog-id=270 op=LOAD Dec 12 18:19:10.813000 audit[6116]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2810 pid=6116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:10.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163306566393836333062356430343164616133663166666464343562 Dec 12 18:19:10.813000 audit: BPF prog-id=271 op=LOAD Dec 12 18:19:10.813000 audit[6116]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2810 pid=6116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:10.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163306566393836333062356430343164616133663166666464343562 Dec 12 18:19:10.814000 audit: BPF prog-id=271 op=UNLOAD Dec 12 18:19:10.814000 audit[6116]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=6116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:10.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163306566393836333062356430343164616133663166666464343562 Dec 12 18:19:10.814000 audit: BPF prog-id=270 op=UNLOAD Dec 12 18:19:10.814000 audit[6116]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2810 pid=6116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:10.814000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163306566393836333062356430343164616133663166666464343562 Dec 12 18:19:10.814000 audit: BPF prog-id=272 op=LOAD Dec 12 18:19:10.814000 audit[6116]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2810 pid=6116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:10.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163306566393836333062356430343164616133663166666464343562 Dec 12 18:19:10.857154 containerd[1848]: time="2025-12-12T18:19:10.857113939Z" level=info msg="StartContainer for \"ac0ef98630b5d041daa3f1ffdd45bed9f75e52f3564e309fa9bdf4fde4d5debd\" returns successfully" Dec 12 18:19:11.493468 update_engine[1829]: I20251212 18:19:11.493371 1829 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 18:19:11.493468 update_engine[1829]: I20251212 18:19:11.493464 1829 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 18:19:11.493868 update_engine[1829]: I20251212 18:19:11.493814 1829 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 18:19:11.500604 update_engine[1829]: E20251212 18:19:11.500530 1829 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 18:19:11.500802 update_engine[1829]: I20251212 18:19:11.500651 1829 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 12 18:19:11.500802 update_engine[1829]: I20251212 18:19:11.500679 1829 omaha_request_action.cc:617] Omaha request response: Dec 12 18:19:11.500802 update_engine[1829]: E20251212 18:19:11.500763 1829 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 12 18:19:11.500802 update_engine[1829]: I20251212 18:19:11.500782 1829 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 12 18:19:11.500802 update_engine[1829]: I20251212 18:19:11.500789 1829 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 12 18:19:11.500802 update_engine[1829]: I20251212 18:19:11.500795 1829 update_attempter.cc:306] Processing Done. Dec 12 18:19:11.500930 update_engine[1829]: E20251212 18:19:11.500811 1829 update_attempter.cc:619] Update failed. Dec 12 18:19:11.500930 update_engine[1829]: I20251212 18:19:11.500818 1829 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 12 18:19:11.500930 update_engine[1829]: I20251212 18:19:11.500824 1829 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 12 18:19:11.500930 update_engine[1829]: I20251212 18:19:11.500831 1829 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 12 18:19:11.500930 update_engine[1829]: I20251212 18:19:11.500907 1829 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 18:19:11.501030 update_engine[1829]: I20251212 18:19:11.500933 1829 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 18:19:11.501030 update_engine[1829]: I20251212 18:19:11.500941 1829 omaha_request_action.cc:272] Request: Dec 12 18:19:11.501030 update_engine[1829]: Dec 12 18:19:11.501030 update_engine[1829]: Dec 12 18:19:11.501030 update_engine[1829]: Dec 12 18:19:11.501030 update_engine[1829]: Dec 12 18:19:11.501030 update_engine[1829]: Dec 12 18:19:11.501030 update_engine[1829]: Dec 12 18:19:11.501030 update_engine[1829]: I20251212 18:19:11.500948 1829 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 18:19:11.501030 update_engine[1829]: I20251212 18:19:11.500973 1829 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 18:19:11.501317 update_engine[1829]: I20251212 18:19:11.501296 1829 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 18:19:11.501379 locksmithd[1873]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 12 18:19:11.507478 update_engine[1829]: E20251212 18:19:11.507426 1829 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 18:19:11.507593 update_engine[1829]: I20251212 18:19:11.507513 1829 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 12 18:19:11.507593 update_engine[1829]: I20251212 18:19:11.507524 1829 omaha_request_action.cc:617] Omaha request response: Dec 12 18:19:11.507593 update_engine[1829]: I20251212 18:19:11.507534 1829 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 12 18:19:11.507593 update_engine[1829]: I20251212 18:19:11.507539 1829 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 12 18:19:11.507593 update_engine[1829]: I20251212 18:19:11.507546 1829 update_attempter.cc:306] Processing Done. Dec 12 18:19:11.507593 update_engine[1829]: I20251212 18:19:11.507553 1829 update_attempter.cc:310] Error event sent. Dec 12 18:19:11.507593 update_engine[1829]: I20251212 18:19:11.507562 1829 update_check_scheduler.cc:74] Next update check in 43m57s Dec 12 18:19:11.507964 locksmithd[1873]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 12 18:19:14.827863 kubelet[3114]: E1212 18:19:14.827823 3114 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4515-1-0-e-14f87f00b0)" Dec 12 18:19:16.026326 systemd[1]: cri-containerd-89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f.scope: Deactivated successfully. 
Dec 12 18:19:16.027476 containerd[1848]: time="2025-12-12T18:19:16.027321633Z" level=info msg="received container exit event container_id:\"89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f\" id:\"89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f\" pid:6048 exit_status:1 exited_at:{seconds:1765563556 nanos:26786406}" Dec 12 18:19:16.038240 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 12 18:19:16.038353 kernel: audit: type=1334 audit(1765563556.035:920): prog-id=257 op=UNLOAD Dec 12 18:19:16.035000 audit: BPF prog-id=257 op=UNLOAD Dec 12 18:19:16.038978 kernel: audit: type=1334 audit(1765563556.035:921): prog-id=261 op=UNLOAD Dec 12 18:19:16.035000 audit: BPF prog-id=261 op=UNLOAD Dec 12 18:19:16.047014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f-rootfs.mount: Deactivated successfully. Dec 12 18:19:16.772494 kubelet[3114]: I1212 18:19:16.772438 3114 scope.go:117] "RemoveContainer" containerID="0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b" Dec 12 18:19:16.772930 kubelet[3114]: I1212 18:19:16.772760 3114 scope.go:117] "RemoveContainer" containerID="89c1da2ecb42955af4c5be82e8e56c3d4ddb9504c4715c9a414cf83db081422f" Dec 12 18:19:16.772930 kubelet[3114]: E1212 18:19:16.772902 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-lkdvm_tigera-operator(d8ca68cd-5422-42bf-b546-b6cfae705403)\"" pod="tigera-operator/tigera-operator-7dcd859c48-lkdvm" podUID="d8ca68cd-5422-42bf-b546-b6cfae705403" Dec 12 18:19:16.773978 containerd[1848]: time="2025-12-12T18:19:16.773931863Z" level=info msg="RemoveContainer for \"0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b\"" Dec 12 18:19:16.779205 containerd[1848]: time="2025-12-12T18:19:16.779156936Z" level=info msg="RemoveContainer for \"0c5e141b4fdc71f2b2f41be01bcb61599c09f6d2daadc4908a8124fafe112e0b\" returns successfully" Dec 12 18:19:18.265768 kubelet[3114]: E1212 18:19:18.265709 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vrl2" podUID="a83fad8b-d566-4d95-b74e-3a16ee22e614" Dec 12 18:19:19.266006 kubelet[3114]: E1212 18:19:19.265932 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-2p5wd" podUID="d5affb4a-a5c2-4140-9517-3b93721ff225" Dec 12 18:19:20.267025 kubelet[3114]: E1212 18:19:20.266961 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9klxk" podUID="4cc6cae7-6092-4840-b2c3-065b3bb220f3" Dec 12 18:19:21.265756 kubelet[3114]: E1212 18:19:21.265708 3114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-646c8584fc-mwbp6" podUID="fffa851a-7d3a-4af7-80b6-6f040212a19b"